00:00:00.001 Started by upstream project "autotest-per-patch" build number 127218
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:06.067 The recommended git tool is: git
00:00:06.068 using credential 00000000-0000-0000-0000-000000000002
00:00:06.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:06.082 Fetching changes from the remote Git repository
00:00:06.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:06.097 Using shallow fetch with depth 1
00:00:06.097 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:06.097 > git --version # timeout=10
00:00:06.111 > git --version # 'git version 2.39.2'
00:00:06.111 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:06.122 Setting http proxy: proxy-dmz.intel.com:911
00:00:06.122 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:11.868 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:11.880 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:11.894 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:11.894 > git config core.sparsecheckout # timeout=10
00:00:11.906 > git read-tree -mu HEAD # timeout=10
00:00:11.923 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:11.944 Commit message: "packer: Add bios builder"
00:00:11.945 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:12.025 [Pipeline] Start of Pipeline
00:00:12.039 [Pipeline] library
00:00:12.041 Loading library shm_lib@master
00:00:12.041 Library shm_lib@master is cached. Copying from home.
00:00:12.058 [Pipeline] node
00:00:12.070 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:12.072 [Pipeline] {
00:00:12.084 [Pipeline] catchError
00:00:12.086 [Pipeline] {
00:00:12.101 [Pipeline] wrap
00:00:12.111 [Pipeline] {
00:00:12.121 [Pipeline] stage
00:00:12.124 [Pipeline] { (Prologue)
00:00:12.317 [Pipeline] sh
00:00:12.600 + logger -p user.info -t JENKINS-CI
00:00:12.617 [Pipeline] echo
00:00:12.619 Node: GP8
00:00:12.628 [Pipeline] sh
00:00:12.923 [Pipeline] setCustomBuildProperty
00:00:12.932 [Pipeline] echo
00:00:12.933 Cleanup processes
00:00:12.936 [Pipeline] sh
00:00:13.215 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:13.215 2304535 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:13.233 [Pipeline] sh
00:00:13.585 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:13.585 ++ grep -v 'sudo pgrep'
00:00:13.585 ++ awk '{print $1}'
00:00:13.585 + sudo kill -9
00:00:13.601 + true
00:00:13.613 [Pipeline] cleanWs
00:00:13.613 [WS-CLEANUP] Deleting project workspace...
00:00:13.613 [WS-CLEANUP] Deferred wipeout is used...
00:00:13.620 [WS-CLEANUP] done
00:00:13.629 [Pipeline] setCustomBuildProperty
00:00:13.647 [Pipeline] sh
00:00:13.927 + sudo git config --global --replace-all safe.directory '*'
00:00:14.006 [Pipeline] httpRequest
00:00:14.027 [Pipeline] echo
00:00:14.028 Sorcerer 10.211.164.101 is alive
00:00:14.034 [Pipeline] httpRequest
00:00:14.039 HttpMethod: GET
00:00:14.039 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:14.040 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:14.045 Response Code: HTTP/1.1 200 OK
00:00:14.045 Success: Status code 200 is in the accepted range: 200,404
00:00:14.046 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:27.923 [Pipeline] sh
00:00:28.207 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:28.224 [Pipeline] httpRequest
00:00:28.245 [Pipeline] echo
00:00:28.247 Sorcerer 10.211.164.101 is alive
00:00:28.256 [Pipeline] httpRequest
00:00:28.261 HttpMethod: GET
00:00:28.262 URL: http://10.211.164.101/packages/spdk_dcc54343ad1d5ec78d7305947940467c78cd7fa3.tar.gz
00:00:28.263 Sending request to url: http://10.211.164.101/packages/spdk_dcc54343ad1d5ec78d7305947940467c78cd7fa3.tar.gz
00:00:28.267 Response Code: HTTP/1.1 200 OK
00:00:28.268 Success: Status code 200 is in the accepted range: 200,404
00:00:28.268 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dcc54343ad1d5ec78d7305947940467c78cd7fa3.tar.gz
00:03:16.505 [Pipeline] sh
00:03:16.781 + tar --no-same-owner -xf spdk_dcc54343ad1d5ec78d7305947940467c78cd7fa3.tar.gz
00:03:20.075 [Pipeline] sh
00:03:20.359 + git -C spdk log --oneline -n5
00:03:20.359 dcc54343a module/accel/dsa: add DIX Verify
00:03:20.359 be4567807 lib/idxd: add DIX generate
00:03:20.359 1beb86cd6 lib/idxd: add descriptors for DIX generate
00:03:20.359 477912bde lib/accel: add spdk_accel_append_dix_generate/verify
00:03:20.359 325310f6a accel_perf: add support for DIX Generate/Verify
00:03:20.370 [Pipeline] }
00:03:20.389 [Pipeline] // stage
00:03:20.398 [Pipeline] stage
00:03:20.401 [Pipeline] { (Prepare)
00:03:20.419 [Pipeline] writeFile
00:03:20.440 [Pipeline] sh
00:03:20.719 + logger -p user.info -t JENKINS-CI
00:03:20.739 [Pipeline] sh
00:03:21.020 + logger -p user.info -t JENKINS-CI
00:03:21.032 [Pipeline] sh
00:03:21.309 + cat autorun-spdk.conf
00:03:21.309 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:21.309 SPDK_TEST_NVMF=1
00:03:21.309 SPDK_TEST_NVME_CLI=1
00:03:21.309 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:21.309 SPDK_TEST_NVMF_NICS=e810
00:03:21.309 SPDK_TEST_VFIOUSER=1
00:03:21.309 SPDK_RUN_UBSAN=1
00:03:21.309 NET_TYPE=phy
00:03:21.316 RUN_NIGHTLY=0
00:03:21.323 [Pipeline] readFile
00:03:21.351 [Pipeline] withEnv
00:03:21.354 [Pipeline] {
00:03:21.368 [Pipeline] sh
00:03:21.650 + set -ex
00:03:21.650 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:21.650 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:21.650 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:21.650 ++ SPDK_TEST_NVMF=1
00:03:21.650 ++ SPDK_TEST_NVME_CLI=1
00:03:21.650 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:21.650 ++ SPDK_TEST_NVMF_NICS=e810
00:03:21.650 ++ SPDK_TEST_VFIOUSER=1
00:03:21.650 ++ SPDK_RUN_UBSAN=1
00:03:21.650 ++ NET_TYPE=phy
00:03:21.650 ++ RUN_NIGHTLY=0
00:03:21.650 + case $SPDK_TEST_NVMF_NICS in
00:03:21.650 + DRIVERS=ice
00:03:21.650 + [[ tcp == \r\d\m\a ]]
00:03:21.650 + [[ -n ice
]]
00:03:21.650 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:21.650 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:21.650 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:21.650 rmmod: ERROR: Module irdma is not currently loaded
00:03:21.650 rmmod: ERROR: Module i40iw is not currently loaded
00:03:21.650 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:21.650 + true
00:03:21.650 + for D in $DRIVERS
00:03:21.650 + sudo modprobe ice
00:03:21.650 + exit 0
00:03:21.660 [Pipeline] }
00:03:21.678 [Pipeline] // withEnv
00:03:21.683 [Pipeline] }
00:03:21.698 [Pipeline] // stage
00:03:21.706 [Pipeline] catchError
00:03:21.708 [Pipeline] {
00:03:21.723 [Pipeline] timeout
00:03:21.724 Timeout set to expire in 50 min
00:03:21.726 [Pipeline] {
00:03:21.744 [Pipeline] stage
00:03:21.746 [Pipeline] { (Tests)
00:03:21.763 [Pipeline] sh
00:03:22.044 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:22.045 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:22.045 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:22.045 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:22.045 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:22.045 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:22.045 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:22.045 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:22.045 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:22.045 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:22.045 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:22.045 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:22.045 + source /etc/os-release
00:03:22.045 ++ NAME='Fedora Linux'
00:03:22.045 ++ VERSION='38 (Cloud Edition)'
00:03:22.045 ++ ID=fedora
00:03:22.045 ++ VERSION_ID=38
00:03:22.045 ++ VERSION_CODENAME=
00:03:22.045 ++ PLATFORM_ID=platform:f38
00:03:22.045 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:03:22.045 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:22.045 ++ LOGO=fedora-logo-icon
00:03:22.045 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:03:22.045 ++ HOME_URL=https://fedoraproject.org/
00:03:22.045 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:03:22.045 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:22.045 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:22.045 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:22.045 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:03:22.045 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:22.045 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:03:22.045 ++ SUPPORT_END=2024-05-14
00:03:22.045 ++ VARIANT='Cloud Edition'
00:03:22.045 ++ VARIANT_ID=cloud
00:03:22.045 + uname -a
00:03:22.045 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:03:22.045 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:23.421 Hugepages
00:03:23.421 node hugesize free / total
00:03:23.421 node0 1048576kB 0 / 0
00:03:23.421 node0 2048kB 0 / 0
00:03:23.421 node1 1048576kB 0 / 0
00:03:23.421 node1 2048kB 0 / 0
00:03:23.421
00:03:23.421 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:23.421 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:23.421 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:23.421 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:23.421 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:23.421 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:23.421 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:23.421 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:23.421 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:23.421 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:23.421 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:23.421 + rm -f /tmp/spdk-ld-path
00:03:23.421 + source autorun-spdk.conf
00:03:23.421 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:23.421 ++ SPDK_TEST_NVMF=1
00:03:23.422 ++ SPDK_TEST_NVME_CLI=1
00:03:23.422 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:23.422 ++ SPDK_TEST_NVMF_NICS=e810
00:03:23.422 ++ SPDK_TEST_VFIOUSER=1
00:03:23.422 ++ SPDK_RUN_UBSAN=1
00:03:23.422 ++ NET_TYPE=phy
00:03:23.422 ++ RUN_NIGHTLY=0
00:03:23.422 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:23.422 + [[ -n '' ]]
00:03:23.422 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:23.422 + for M in /var/spdk/build-*-manifest.txt
00:03:23.422 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:23.422 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:23.422 + for M in /var/spdk/build-*-manifest.txt
00:03:23.422 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:23.422 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:23.422 ++ uname
00:03:23.422 + [[ Linux == \L\i\n\u\x ]]
00:03:23.422 + sudo dmesg -T
00:03:23.681 + sudo dmesg --clear
00:03:23.681 + dmesg_pid=2305859
00:03:23.681 + [[ Fedora Linux == FreeBSD ]]
00:03:23.681 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:23.681 + sudo dmesg -Tw
00:03:23.681 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:23.681 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:23.681 + [[ -x /usr/src/fio-static/fio ]]
00:03:23.681 + export FIO_BIN=/usr/src/fio-static/fio
00:03:23.681 + FIO_BIN=/usr/src/fio-static/fio
00:03:23.681 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:23.681 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:23.681 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:23.681 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:23.681 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:23.681 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:23.681 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:23.681 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:23.681 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:23.681 Test configuration:
00:03:23.681 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:23.681 SPDK_TEST_NVMF=1
00:03:23.681 SPDK_TEST_NVME_CLI=1
00:03:23.681 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:23.681 SPDK_TEST_NVMF_NICS=e810
00:03:23.681 SPDK_TEST_VFIOUSER=1
00:03:23.681 SPDK_RUN_UBSAN=1
00:03:23.681 NET_TYPE=phy
00:03:23.681 RUN_NIGHTLY=0
13:57:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:57:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
13:57:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:57:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:57:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:57:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:57:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:57:40 -- paths/export.sh@5 -- $ export PATH
13:57:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:57:40 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
13:57:40 -- common/autobuild_common.sh@447 -- $ date +%s
13:57:40 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721995060.XXXXXX
13:57:40 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721995060.oTsvNs
13:57:40 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
13:57:40 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
13:57:40 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
13:57:40 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
13:57:40 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:57:40 -- common/autobuild_common.sh@463 -- $ get_config_params
13:57:40 -- common/autotest_common.sh@398 -- $ xtrace_disable
13:57:40 -- common/autotest_common.sh@10 -- $ set +x
13:57:40 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
13:57:40 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
13:57:40 -- pm/common@17 -- $ local monitor
13:57:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:57:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:57:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:57:40 -- pm/common@21 -- $ date +%s
13:57:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:57:40 -- pm/common@21 -- $ date +%s
13:57:40 -- pm/common@25 -- $ sleep 1
13:57:40 -- pm/common@21 -- $ date +%s
13:57:40 -- pm/common@21 -- $ date +%s
13:57:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995060
13:57:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995060
13:57:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995060
13:57:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721995060
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995060_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995060_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995060_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721995060_collect-bmc-pm.bmc.pm.log
13:57:41 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
13:57:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
13:57:41 -- spdk/autobuild.sh@12 -- $ umask 022
13:57:41 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
13:57:41 -- spdk/autobuild.sh@16 -- $ date -u
00:03:24.616 Fri Jul 26 11:57:41 AM UTC 2024
13:57:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:24.616 v24.09-pre-329-gdcc54343a
13:57:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
13:57:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
13:57:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
13:57:41 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
13:57:41 -- common/autotest_common.sh@1107 -- $ xtrace_disable
13:57:41 -- common/autotest_common.sh@10 -- $ set +x
00:03:24.616 ************************************
00:03:24.616 START TEST ubsan
00:03:24.616 ************************************
13:57:41 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:24.616 using ubsan
00:03:24.616
00:03:24.616 real 0m0.000s
00:03:24.616 user 0m0.000s
00:03:24.616 sys 0m0.000s
13:57:41 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
13:57:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:24.616 ************************************
00:03:24.616 END TEST ubsan
00:03:24.616 ************************************
13:57:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
13:57:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
13:57:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
13:57:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
13:57:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
13:57:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
13:57:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
13:57:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
13:57:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:24.874 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:24.874 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:25.442 Using 'verbs' RDMA provider
00:03:41.250 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:56.157 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:56.157 Creating mk/config.mk...done.
00:03:56.157 Creating mk/cc.flags.mk...done.
00:03:56.157 Type 'make' to build.
13:58:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
13:58:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
13:58:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable
13:58:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:56.157 ************************************
00:03:56.157 START TEST make
00:03:56.157 ************************************
13:58:11 make -- common/autotest_common.sh@1125 -- $ make -j48
00:03:56.157 make[1]: Nothing to be done for 'all'.
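[Note] The configure flags and parallel make above come straight from this job's config_params. A minimal sketch for reproducing the same build step outside CI, assuming your own local SPDK clone (the ~/spdk path and job count are illustrative, the flags are the ones logged):

  # assumption: ~/spdk is a local SPDK checkout, not the CI workspace
  cd ~/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"   # the CI job used make -j48; match your core count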
00:03:57.116 The Meson build system
00:03:57.117 Version: 1.3.1
00:03:57.117 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:57.117 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:57.117 Build type: native build
00:03:57.117 Project name: libvfio-user
00:03:57.117 Project version: 0.0.1
00:03:57.117 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:57.117 C linker for the host machine: cc ld.bfd 2.39-16
00:03:57.117 Host machine cpu family: x86_64
00:03:57.117 Host machine cpu: x86_64
00:03:57.117 Run-time dependency threads found: YES
00:03:57.117 Library dl found: YES
00:03:57.117 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:57.117 Run-time dependency json-c found: YES 0.17
00:03:57.117 Run-time dependency cmocka found: YES 1.1.7
00:03:57.117 Program pytest-3 found: NO
00:03:57.117 Program flake8 found: NO
00:03:57.117 Program misspell-fixer found: NO
00:03:57.117 Program restructuredtext-lint found: NO
00:03:57.117 Program valgrind found: YES (/usr/bin/valgrind)
00:03:57.117 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:57.117 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:57.117 Compiler for C supports arguments -Wwrite-strings: YES
00:03:57.117 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:57.117 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:57.117 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:57.117 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:57.117 Build targets in project: 8
00:03:57.117 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:57.117 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:57.117
00:03:57.117 libvfio-user 0.0.1
00:03:57.117
00:03:57.117 User defined options
00:03:57.117 buildtype : debug
00:03:57.117 default_library: shared
00:03:57.117 libdir : /usr/local/lib
00:03:57.117
00:03:57.117 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:58.064 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:58.064 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:58.064 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:58.064 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:58.064 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:58.064 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:58.064 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:58.064 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:58.064 [8/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:58.064 [9/37] Compiling C object samples/null.p/null.c.o
00:03:58.064 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:58.064 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:58.326 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:58.326 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:58.326 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:58.326 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:58.326 [16/37] Compiling C object samples/server.p/server.c.o
00:03:58.326 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:58.326 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:58.326 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:58.326 [20/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:58.326 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:58.326 [22/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:58.326 [23/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:58.326 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:58.326 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:58.326 [26/37] Compiling C object samples/client.p/client.c.o
00:03:58.587 [27/37] Linking target samples/client
00:03:58.587 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:58.587 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:58.587 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:58.587 [31/37] Linking target test/unit_tests
00:03:58.852 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:58.852 [33/37] Linking target samples/gpio-pci-idio-16
00:03:58.852 [34/37] Linking target samples/lspci
00:03:58.852 [35/37] Linking target samples/null
00:03:58.852 [36/37] Linking target samples/server
00:03:58.852 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:58.852 INFO: autodetecting backend as ninja
00:03:58.852 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
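[Note] SPDK's build wraps libvfio-user's Meson build; the summary above (buildtype debug, default_library shared, libdir /usr/local/lib) corresponds to a standalone invocation roughly like the sketch below. The upstream clone URL and staging DESTDIR are assumptions for illustration, not taken from this log:

  git clone https://github.com/nutanix/libvfio-user.git && cd libvfio-user
  meson setup build-debug --buildtype=debug --libdir=/usr/local/lib -Ddefault_library=shared
  ninja -C build-debug
  DESTDIR=/tmp/vfu-staging meson install --quiet -C build-debug   # mirrors the DESTDIR install logged next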
00:03:58.852 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:59.809 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:59.809 ninja: no work to do.
00:04:05.086 The Meson build system
00:04:05.086 Version: 1.3.1
00:04:05.086 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:05.086 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:05.086 Build type: native build
00:04:05.086 Program cat found: YES (/usr/bin/cat)
00:04:05.086 Project name: DPDK
00:04:05.086 Project version: 24.03.0
00:04:05.086 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:04:05.086 C linker for the host machine: cc ld.bfd 2.39-16
00:04:05.086 Host machine cpu family: x86_64
00:04:05.086 Host machine cpu: x86_64
00:04:05.086 Message: ## Building in Developer Mode ##
00:04:05.086 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:05.086 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:05.086 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:05.086 Program python3 found: YES (/usr/bin/python3)
00:04:05.086 Program cat found: YES (/usr/bin/cat)
00:04:05.086 Compiler for C supports arguments -march=native: YES
00:04:05.086 Checking for size of "void *" : 8
00:04:05.086 Checking for size of "void *" : 8 (cached)
00:04:05.086 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:04:05.086 Library m found: YES
00:04:05.086 Library numa found: YES
00:04:05.086 Has header "numaif.h" : YES
00:04:05.086 Library fdt found: NO
00:04:05.086 Library execinfo found: NO
00:04:05.086 Has header "execinfo.h" : YES
00:04:05.086 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:04:05.086 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:05.086 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:05.086 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:05.086 Run-time dependency openssl found: YES 3.0.9
00:04:05.086 Run-time dependency libpcap found: YES 1.10.4
00:04:05.086 Has header "pcap.h" with dependency libpcap: YES
00:04:05.086 Compiler for C supports arguments -Wcast-qual: YES
00:04:05.086 Compiler for C supports arguments -Wdeprecated: YES
00:04:05.086 Compiler for C supports arguments -Wformat: YES
00:04:05.086 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:05.086 Compiler for C supports arguments -Wformat-security: NO
00:04:05.086 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:05.086 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:05.086 Compiler for C supports arguments -Wnested-externs: YES
00:04:05.086 Compiler for C supports arguments -Wold-style-definition: YES
00:04:05.086 Compiler for C supports arguments -Wpointer-arith: YES
00:04:05.086 Compiler for C supports arguments -Wsign-compare: YES
00:04:05.086 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:05.086 Compiler for C supports arguments -Wundef: YES
00:04:05.086 Compiler for C supports arguments -Wwrite-strings: YES
00:04:05.086 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:05.086 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:05.086 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:05.086 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:05.086 Program objdump found: YES (/usr/bin/objdump)
00:04:05.086 Compiler for C supports arguments -mavx512f: YES
00:04:05.086 Checking if "AVX512 checking" compiles: YES
00:04:05.086 Fetching value of define "__SSE4_2__" : 1
00:04:05.086 Fetching value of define "__AES__" : 1
00:04:05.086 Fetching value of define "__AVX__" : 1
00:04:05.086 Fetching value of define "__AVX2__" : (undefined)
00:04:05.086 Fetching value of define "__AVX512BW__" : (undefined)
00:04:05.086 Fetching value of define "__AVX512CD__" : (undefined)
00:04:05.086 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:05.086 Fetching value of define "__AVX512F__" : (undefined)
00:04:05.086 Fetching value of define "__AVX512VL__" : (undefined)
00:04:05.086 Fetching value of define "__PCLMUL__" : 1
00:04:05.086 Fetching value of define "__RDRND__" : 1
00:04:05.086 Fetching value of define "__RDSEED__" : (undefined)
00:04:05.086 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:05.086 Fetching value of define "__znver1__" : (undefined)
00:04:05.086 Fetching value of define "__znver2__" : (undefined)
00:04:05.086 Fetching value of define "__znver3__" : (undefined)
00:04:05.086 Fetching value of define "__znver4__" : (undefined)
00:04:05.086 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:05.086 Message: lib/log: Defining dependency "log"
00:04:05.086 Message: lib/kvargs: Defining dependency "kvargs"
00:04:05.086 Message: lib/telemetry: Defining dependency "telemetry"
00:04:05.086 Checking for function "getentropy" : NO
00:04:05.086 Message: lib/eal: Defining dependency "eal"
00:04:05.086 Message: lib/ring: Defining dependency "ring"
00:04:05.086 Message: lib/rcu: Defining dependency "rcu"
00:04:05.086 Message: lib/mempool: Defining dependency "mempool"
00:04:05.086 Message: lib/mbuf: Defining dependency "mbuf"
00:04:05.086 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:05.086 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:05.086 Compiler for C supports arguments -mpclmul: YES
00:04:05.086 Compiler for C supports arguments -maes: YES
00:04:05.086 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:05.086 Compiler for C supports arguments -mavx512bw: YES
00:04:05.086 Compiler for C supports arguments -mavx512dq: YES
00:04:05.086 Compiler for C supports arguments -mavx512vl: YES
00:04:05.086 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:05.086 Compiler for C supports arguments -mavx2: YES
00:04:05.086 Compiler for C supports arguments -mavx: YES
00:04:05.086 Message: lib/net: Defining dependency "net"
00:04:05.086 Message: lib/meter: Defining dependency "meter"
00:04:05.086 Message: lib/ethdev: Defining dependency "ethdev"
00:04:05.086 Message: lib/pci: Defining dependency "pci"
00:04:05.086 Message: lib/cmdline: Defining dependency "cmdline"
00:04:05.086 Message: lib/hash: Defining dependency "hash"
00:04:05.086 Message: lib/timer: Defining dependency "timer"
00:04:05.086 Message: lib/compressdev: Defining dependency "compressdev"
00:04:05.087 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:05.087 Message: lib/dmadev: Defining dependency "dmadev"
00:04:05.087 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:05.087 Message: lib/power: Defining dependency "power"
00:04:05.087 Message: lib/reorder: Defining dependency "reorder"
00:04:05.087 Message: lib/security: Defining dependency "security"
00:04:05.087 Has header "linux/userfaultfd.h" : YES
00:04:05.087 Has header "linux/vduse.h" : YES
00:04:05.087 Message: lib/vhost: Defining dependency "vhost"
00:04:05.087 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:05.087 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:05.087 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:05.087 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:05.087 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:05.087 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:05.087 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:05.087 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:05.087 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:05.087 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:05.087 Program doxygen found: YES (/usr/bin/doxygen)
00:04:05.087 Configuring doxy-api-html.conf using configuration
00:04:05.087 Configuring doxy-api-man.conf using configuration
00:04:05.087 Program mandb found: YES (/usr/bin/mandb)
00:04:05.087 Program sphinx-build found: NO
00:04:05.087 Configuring rte_build_config.h using configuration
00:04:05.087 Message:
00:04:05.087 =================
00:04:05.087 Applications Enabled
00:04:05.087 =================
00:04:05.087
00:04:05.087 apps:
00:04:05.087
00:04:05.087
00:04:05.087 Message:
00:04:05.087 =================
00:04:05.087 Libraries Enabled
00:04:05.087 =================
00:04:05.087
00:04:05.087 libs:
00:04:05.087 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:05.087 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:05.087 cryptodev, dmadev, power, reorder, security, vhost,
00:04:05.087
00:04:05.087 Message:
00:04:05.087 ===============
00:04:05.087 Drivers Enabled
00:04:05.087 ===============
00:04:05.087
00:04:05.087 common:
00:04:05.087
00:04:05.087 bus:
00:04:05.087 pci, vdev,
00:04:05.087 mempool:
00:04:05.087 ring,
00:04:05.087 dma:
00:04:05.087
00:04:05.087 net:
00:04:05.087
00:04:05.087 crypto:
00:04:05.087
00:04:05.087 compress:
00:04:05.087
00:04:05.087 vdpa:
00:04:05.087
00:04:05.087
00:04:05.087 Message:
00:04:05.087 =================
00:04:05.087 Content Skipped
00:04:05.087 =================
00:04:05.087
00:04:05.087 apps:
00:04:05.087 dumpcap: explicitly disabled via build config
00:04:05.087 graph: explicitly disabled via build config
00:04:05.087 pdump: explicitly disabled via build config
00:04:05.087 proc-info: explicitly disabled via build config
00:04:05.087 test-acl: explicitly disabled via build config
00:04:05.087 test-bbdev: explicitly disabled via build config
00:04:05.087 test-cmdline: explicitly disabled via build config
00:04:05.087 test-compress-perf: explicitly disabled via build config
00:04:05.087 test-crypto-perf: explicitly disabled via build config
00:04:05.087 test-dma-perf: explicitly disabled via build config
00:04:05.087 test-eventdev: explicitly disabled via build config
00:04:05.087 test-fib: explicitly disabled via build config
00:04:05.087 test-flow-perf: explicitly disabled via build config
00:04:05.087 test-gpudev: explicitly disabled via build config
00:04:05.087 test-mldev: explicitly disabled via build config
00:04:05.087 test-pipeline: explicitly disabled via build config
00:04:05.087 test-pmd: explicitly disabled via build config
00:04:05.087 test-regex: explicitly disabled via build config
00:04:05.087 test-sad: explicitly disabled via build config
00:04:05.087 test-security-perf: explicitly disabled via build config
00:04:05.087
00:04:05.087 libs:
00:04:05.087 argparse: explicitly disabled via build config
00:04:05.087 metrics: explicitly disabled via build config
00:04:05.087 acl: explicitly disabled via build config
00:04:05.087 bbdev: explicitly disabled via build config
00:04:05.087 bitratestats: explicitly disabled via build config
00:04:05.087 bpf: explicitly disabled via build config
00:04:05.087 cfgfile: explicitly disabled via build config
00:04:05.087 distributor: explicitly disabled via build config
00:04:05.087 efd: explicitly disabled via build config
00:04:05.087 eventdev: explicitly disabled via build config
00:04:05.087 dispatcher: explicitly disabled via build config
00:04:05.087 gpudev: explicitly disabled via build config
00:04:05.087 gro: explicitly disabled via build config
00:04:05.087 gso: explicitly disabled via build config
00:04:05.087 ip_frag: explicitly disabled via build config
00:04:05.087 jobstats: explicitly disabled via build config
00:04:05.087 latencystats: explicitly disabled via build config
00:04:05.087 lpm: explicitly disabled via build config
00:04:05.087 member: explicitly disabled via build config
00:04:05.087 pcapng: explicitly disabled via build config
00:04:05.087 rawdev: explicitly disabled via build config
00:04:05.087 regexdev: explicitly disabled via build config
00:04:05.087 mldev: explicitly disabled via build config
00:04:05.087 rib: explicitly disabled via build config
00:04:05.087 sched: explicitly disabled via build config
00:04:05.087 stack: explicitly disabled via build config
00:04:05.087 ipsec: explicitly disabled via build config
00:04:05.087 pdcp: explicitly disabled via build config
00:04:05.087 fib: explicitly disabled via build config
00:04:05.087 port: explicitly disabled via build config
00:04:05.087 pdump: explicitly disabled via build config
00:04:05.087 table: explicitly disabled via build config
00:04:05.087 pipeline: explicitly disabled via build config
00:04:05.087 graph: explicitly disabled via build config
00:04:05.087 node: explicitly disabled via build config
00:04:05.087
00:04:05.087 drivers:
00:04:05.087 common/cpt: not in enabled drivers build config
00:04:05.087 common/dpaax: not in enabled drivers build config
00:04:05.087 common/iavf: not in enabled drivers build config
00:04:05.087 common/idpf: not in enabled drivers build config
00:04:05.087 common/ionic: not in enabled drivers build config
00:04:05.087 common/mvep: not in enabled drivers build config
00:04:05.087 common/octeontx: not in enabled drivers build config
00:04:05.087 bus/auxiliary: not in enabled drivers build config
00:04:05.087 bus/cdx: not in enabled drivers build config
00:04:05.087 bus/dpaa: not in enabled drivers build config
00:04:05.087 bus/fslmc: not in enabled drivers build config
00:04:05.087 bus/ifpga: not in enabled drivers build config
00:04:05.087 bus/platform: not in enabled drivers build config
00:04:05.087 bus/uacce: not in enabled drivers build config
00:04:05.087 bus/vmbus: not in enabled drivers build config
00:04:05.087 common/cnxk: not in enabled drivers build config
00:04:05.087 common/mlx5: not in enabled drivers build config
00:04:05.087 common/nfp: not in enabled drivers build config
00:04:05.087 common/nitrox: not in enabled drivers build config
00:04:05.087 common/qat: not in enabled drivers build config
00:04:05.087 common/sfc_efx: not in enabled drivers build config
00:04:05.088 mempool/bucket: not in enabled drivers build config
00:04:05.088 mempool/cnxk: not in enabled drivers build config
00:04:05.088 mempool/dpaa: not in enabled drivers build config
00:04:05.088 mempool/dpaa2: not in enabled drivers build config
00:04:05.088 mempool/octeontx: not in enabled drivers build config
00:04:05.088 mempool/stack: not in enabled drivers build config
00:04:05.088 dma/cnxk: not in enabled drivers build config
00:04:05.088 dma/dpaa: not in enabled drivers build config
00:04:05.088 dma/dpaa2: not in enabled drivers build config
00:04:05.088 dma/hisilicon: not in enabled drivers build config
00:04:05.088 dma/idxd: not in enabled drivers build config
00:04:05.088 dma/ioat: not in enabled drivers build config
00:04:05.088 dma/skeleton: not in enabled drivers build config
00:04:05.088 net/af_packet: not in enabled drivers build config
00:04:05.088 net/af_xdp: not in enabled drivers build config
00:04:05.088 net/ark: not in enabled drivers build config
00:04:05.088 net/atlantic: not in enabled drivers build config
00:04:05.088 net/avp: not in enabled drivers build config
00:04:05.088 net/axgbe: not in enabled drivers build config
00:04:05.088 net/bnx2x: not in enabled drivers build config
00:04:05.088 net/bnxt: not in enabled drivers build config
00:04:05.088 net/bonding: not in enabled drivers build config
00:04:05.088 net/cnxk: not in enabled drivers build config
00:04:05.088 net/cpfl: not in enabled drivers build config
00:04:05.088 net/cxgbe: not in enabled drivers build config
00:04:05.088 net/dpaa: not in enabled drivers build config
00:04:05.088 net/dpaa2: not in enabled drivers build config
00:04:05.088 net/e1000: not in enabled drivers build config
00:04:05.088 net/ena: not in enabled drivers build config
00:04:05.088 net/enetc: not in enabled drivers build config
00:04:05.088 net/enetfec: not in enabled drivers build config
00:04:05.088 net/enic: not in enabled drivers build config
00:04:05.088 net/failsafe: not in enabled drivers build config
00:04:05.088 net/fm10k: not in enabled drivers build config
00:04:05.088 net/gve: not in enabled drivers build config
00:04:05.088 net/hinic: not in enabled drivers build config
00:04:05.088 net/hns3: not in enabled drivers build config
00:04:05.088 net/i40e: not in enabled drivers build config
00:04:05.088 net/iavf: not in enabled drivers build config
00:04:05.088 net/ice: not in enabled drivers build config
00:04:05.088 net/idpf: not in enabled drivers build config
00:04:05.088 net/igc: not in enabled drivers build config
00:04:05.088 net/ionic: not in enabled drivers build config
00:04:05.088 net/ipn3ke: not in enabled drivers build config
00:04:05.088 net/ixgbe: not in enabled drivers build config
00:04:05.088 net/mana: not in enabled drivers build config
00:04:05.088 net/memif: not in enabled drivers build config
00:04:05.088 net/mlx4: not in enabled drivers build config
00:04:05.088 net/mlx5: not in enabled drivers build config
00:04:05.088 net/mvneta: not in enabled drivers build config
00:04:05.088 net/mvpp2: not in enabled drivers build config
00:04:05.088 net/netvsc: not in enabled drivers build config
00:04:05.088 net/nfb: not in enabled drivers build config
00:04:05.088 net/nfp: not in enabled drivers build config
00:04:05.088 net/ngbe: not in enabled drivers build config
00:04:05.088 net/null: not in enabled drivers build config
00:04:05.088 net/octeontx: not in enabled drivers build config
00:04:05.088 net/octeon_ep: not in enabled drivers build config
00:04:05.088 net/pcap: not in enabled drivers build config
00:04:05.088 net/pfe: not in enabled drivers build config
00:04:05.088 net/qede: not in enabled drivers build config
00:04:05.088 net/ring: not in enabled drivers build config
00:04:05.088 net/sfc: not in enabled drivers build config
00:04:05.088 net/softnic: not in enabled drivers build config
00:04:05.088 net/tap: not in enabled drivers build config
00:04:05.088 net/thunderx: not in enabled drivers build config
00:04:05.088 net/txgbe: not in enabled drivers build config
00:04:05.088 net/vdev_netvsc: not in enabled drivers build config
00:04:05.088 net/vhost: not in enabled drivers build config
00:04:05.088 net/virtio: not in enabled drivers build config
00:04:05.088 net/vmxnet3: not in enabled drivers build config
00:04:05.088 raw/*: missing internal dependency, "rawdev"
00:04:05.088 crypto/armv8: not in enabled drivers build config
00:04:05.088 crypto/bcmfs: not in enabled drivers build config
00:04:05.088 crypto/caam_jr: not in enabled drivers build config
00:04:05.088 crypto/ccp: not in enabled drivers build config
00:04:05.088 crypto/cnxk: not in enabled drivers build config
00:04:05.088 crypto/dpaa_sec: not in enabled drivers build config
00:04:05.088 crypto/dpaa2_sec: not in enabled drivers build config
00:04:05.088 crypto/ipsec_mb: not in enabled drivers build config
00:04:05.088 crypto/mlx5: not in enabled drivers build config
00:04:05.088 crypto/mvsam: not in enabled drivers build config
00:04:05.088 crypto/nitrox: not in enabled drivers build config
00:04:05.088 crypto/null: not in enabled drivers build config
00:04:05.088 crypto/octeontx: not in enabled drivers build config
00:04:05.088 crypto/openssl: not in enabled drivers build config
00:04:05.088 crypto/scheduler: not in enabled drivers build config
00:04:05.088 crypto/uadk: not in enabled drivers build config
00:04:05.088 crypto/virtio: not in enabled drivers build config
00:04:05.088 compress/isal: not in enabled drivers build config
00:04:05.088 compress/mlx5: not in enabled drivers build config
00:04:05.088 compress/nitrox: not in enabled drivers build config
00:04:05.088 compress/octeontx: not in enabled drivers build config
00:04:05.088 compress/zlib: not in enabled drivers build config
00:04:05.088 regex/*: missing internal dependency, "regexdev"
00:04:05.088 ml/*: missing internal dependency, "mldev"
00:04:05.088 vdpa/ifc: not in enabled drivers build config
00:04:05.088 vdpa/mlx5: not in enabled drivers build config
00:04:05.088 vdpa/nfp: not in enabled drivers build config
00:04:05.088 vdpa/sfc: not in enabled drivers build config
00:04:05.088 event/*: missing internal dependency, "eventdev"
00:04:05.088 baseband/*: missing internal dependency, "bbdev"
00:04:05.088 gpu/*: missing internal dependency, "gpudev"
00:04:05.088
00:04:05.088
00:04:05.366 Build targets in project: 85
00:04:05.366
00:04:05.366 DPDK 24.03.0
00:04:05.366
00:04:05.366 User defined options
00:04:05.366 buildtype : debug
00:04:05.366 default_library : shared
00:04:05.366 libdir : lib
00:04:05.366 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:05.366 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:05.366 c_link_args :
00:04:05.366 cpu_instruction_set: native
00:04:05.366 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:04:05.366 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:04:05.366 enable_docs : false
00:04:05.366 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:04:05.366 enable_kmods : false
00:04:05.366 max_lcores : 128
00:04:05.366 tests : false
00:04:05.366
00:04:05.366 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:05.940 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:04:06.204 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:04:06.204 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:06.204 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:06.204 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:06.204 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:06.204 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:06.204 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:04:06.204 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:06.204 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:06.204 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:04:06.204 [11/268] Linking static target lib/librte_kvargs.a
00:04:06.204 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:06.204 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:06.204 [14/268] Linking static target lib/librte_log.a
00:04:06.204 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:06.204 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:07.038 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:07.038 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:07.038 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:07.038 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:07.038 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:07.038 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:07.038 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:07.038 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:07.038 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:07.038 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:07.038 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:07.038 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:07.038 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:07.303 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:07.303 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:07.303 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:07.303 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:07.303 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:07.303 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:07.303 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:07.303 [51/268] Linking static target lib/librte_telemetry.a 00:04:07.303 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:07.303 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:07.303 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:07.303 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:07.303 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:07.303 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:07.303 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:07.303 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:07.303 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:07.303 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:07.303 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:07.303 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:07.304 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:07.304 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.562 [66/268] Linking target lib/librte_log.so.24.1 00:04:07.562 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:07.824 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:07.824 [69/268] Linking static target lib/librte_pci.a 00:04:07.824 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:07.824 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:07.824 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:07.824 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:08.086 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:08.086 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:08.086 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:08.086 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:08.086 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:08.086 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:08.086 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:08.086 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:08.086 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:08.086 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:08.086 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:08.086 [85/268] Linking static target lib/librte_ring.a 00:04:08.086 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:08.086 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:08.086 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:08.086 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:08.086 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:08.086 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:08.086 [92/268] Linking target lib/librte_kvargs.so.24.1 00:04:08.086 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:08.086 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:08.086 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:08.086 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:08.086 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:08.086 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:08.350 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:08.350 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:08.350 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:08.350 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:08.350 [103/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:08.350 [104/268] Linking static target lib/librte_meter.a 00:04:08.350 [105/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.350 [106/268] Linking static target lib/librte_eal.a 00:04:08.350 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:08.350 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:08.350 [109/268] Linking target lib/librte_telemetry.so.24.1 00:04:08.350 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:08.350 [111/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:08.350 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:08.350 [113/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.612 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:08.612 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:08.612 [116/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:08.613 [117/268] Linking static target lib/librte_mempool.a 00:04:08.613 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:08.613 [119/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:08.613 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:08.613 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:08.613 [122/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:08.613 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:08.613 [124/268] Linking static target lib/librte_rcu.a 00:04:08.613 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:08.613 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:08.613 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:08.613 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:08.613 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:08.613 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:08.613 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:08.613 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:08.613 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:08.613 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.874 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:08.874 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:08.874 [137/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:08.874 [138/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:08.874 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:08.874 [140/268] Linking static target lib/librte_net.a 00:04:08.874 [141/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.874 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:08.874 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:09.136 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:09.136 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:09.136 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:09.136 [147/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:09.136 [148/268] Linking static target lib/librte_cmdline.a 00:04:09.136 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:09.136 [150/268] Linking static target lib/librte_timer.a 00:04:09.136 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:09.136 [152/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:09.136 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:09.136 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:09.396 [155/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.396 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:09.396 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:09.396 [158/268] Linking static target lib/librte_dmadev.a 00:04:09.396 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:09.396 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.396 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:09.396 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:09.655 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:09.655 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:09.655 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:09.655 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.655 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:09.655 [168/268] Linking static target lib/librte_compressdev.a 00:04:09.655 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:09.655 [170/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.655 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:09.655 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:09.655 [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:09.655 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:09.655 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:09.655 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:09.655 [177/268] Linking static target lib/librte_hash.a 00:04:09.655 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:09.655 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:09.913 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:09.913 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:09.913 [182/268] Linking static target lib/librte_power.a 00:04:09.913 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:09.913 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.913 [185/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:09.913 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:09.913 [187/268] Linking static target lib/librte_mbuf.a 00:04:09.913 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:09.913 [189/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:09.913 [190/268] Linking static target lib/librte_reorder.a 00:04:09.913 [191/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:09.913 [192/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:09.913 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:10.173 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.173 [195/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:10.173 [196/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:10.173 [197/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.173 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:10.173 [199/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:10.173 [200/268] Linking static target lib/librte_security.a 00:04:10.173 [201/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:10.173 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:10.173 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:10.173 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:10.173 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:10.173 [206/268] Linking static target drivers/librte_bus_vdev.a 00:04:10.173 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:10.173 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.432 [209/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.432 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:10.432 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:10.432 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:10.432 [213/268] Linking static target drivers/librte_bus_pci.a 00:04:10.432 [214/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.432 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:10.432 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:10.432 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:10.432 [218/268] Linking static target drivers/librte_mempool_ring.a 00:04:10.432 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.432 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.433 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.692 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:10.692 [223/268] Linking static target lib/librte_cryptodev.a 00:04:10.692 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:10.692 [225/268] Linking static target lib/librte_ethdev.a 00:04:10.950 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.887 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.790 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:15.692 [229/268] 
Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.692 [230/268] Linking target lib/librte_eal.so.24.1 00:04:15.692 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:15.951 [232/268] Linking target lib/librte_pci.so.24.1 00:04:15.951 [233/268] Linking target lib/librte_meter.so.24.1 00:04:15.951 [234/268] Linking target lib/librte_timer.so.24.1 00:04:15.951 [235/268] Linking target lib/librte_dmadev.so.24.1 00:04:15.951 [236/268] Linking target lib/librte_ring.so.24.1 00:04:15.951 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:15.951 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.951 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:15.951 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:15.951 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:15.951 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:16.209 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:16.209 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:16.209 [245/268] Linking target lib/librte_rcu.so.24.1 00:04:16.209 [246/268] Linking target lib/librte_mempool.so.24.1 00:04:16.209 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:16.209 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:16.468 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:16.468 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:16.468 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:16.727 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:16.727 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:16.727 [254/268] Linking target lib/librte_net.so.24.1 00:04:16.727 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:16.985 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:16.985 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:16.985 [258/268] Linking target lib/librte_cmdline.so.24.1 00:04:16.985 [259/268] Linking target lib/librte_hash.so.24.1 00:04:16.985 [260/268] Linking target lib/librte_ethdev.so.24.1 00:04:16.985 [261/268] Linking target lib/librte_security.so.24.1 00:04:16.985 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:17.244 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:17.244 [264/268] Linking target lib/librte_power.so.24.1 00:04:25.422 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:25.422 [266/268] Linking static target lib/librte_vhost.a 00:04:25.682 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.941 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:25.941 INFO: autodetecting backend as ninja 00:04:25.941 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:04:27.844 CC lib/ut/ut.o 00:04:27.844 CC lib/ut_mock/mock.o 00:04:27.844 CC lib/log/log_flags.o 00:04:27.845 CC lib/log/log.o 00:04:27.845 CC 
lib/log/log_deprecated.o 00:04:27.845 LIB libspdk_ut.a 00:04:27.845 LIB libspdk_log.a 00:04:27.845 SO libspdk_ut.so.2.0 00:04:27.845 LIB libspdk_ut_mock.a 00:04:27.845 SO libspdk_log.so.7.0 00:04:27.845 SO libspdk_ut_mock.so.6.0 00:04:27.845 SYMLINK libspdk_ut.so 00:04:28.103 SYMLINK libspdk_log.so 00:04:28.103 SYMLINK libspdk_ut_mock.so 00:04:28.103 CC lib/ioat/ioat.o 00:04:28.103 CC lib/dma/dma.o 00:04:28.103 CC lib/util/bit_array.o 00:04:28.103 CC lib/util/base64.o 00:04:28.103 CC lib/util/cpuset.o 00:04:28.103 CC lib/util/crc16.o 00:04:28.103 CC lib/util/crc32.o 00:04:28.103 CC lib/util/crc32c.o 00:04:28.103 CC lib/util/crc32_ieee.o 00:04:28.103 CC lib/util/crc64.o 00:04:28.103 CXX lib/trace_parser/trace.o 00:04:28.103 CC lib/util/dif.o 00:04:28.361 CC lib/util/fd.o 00:04:28.361 CC lib/util/fd_group.o 00:04:28.361 CC lib/util/file.o 00:04:28.361 CC lib/util/hexlify.o 00:04:28.361 CC lib/util/iov.o 00:04:28.361 CC lib/util/math.o 00:04:28.361 CC lib/util/pipe.o 00:04:28.361 CC lib/util/net.o 00:04:28.361 CC lib/util/strerror_tls.o 00:04:28.362 CC lib/util/string.o 00:04:28.362 CC lib/util/uuid.o 00:04:28.362 CC lib/util/xor.o 00:04:28.362 CC lib/util/zipf.o 00:04:28.362 CC lib/vfio_user/host/vfio_user_pci.o 00:04:28.362 CC lib/vfio_user/host/vfio_user.o 00:04:28.362 LIB libspdk_dma.a 00:04:28.620 SO libspdk_dma.so.4.0 00:04:28.620 LIB libspdk_ioat.a 00:04:28.620 SO libspdk_ioat.so.7.0 00:04:28.620 SYMLINK libspdk_dma.so 00:04:28.620 SYMLINK libspdk_ioat.so 00:04:28.620 LIB libspdk_vfio_user.a 00:04:28.879 SO libspdk_vfio_user.so.5.0 00:04:28.879 SYMLINK libspdk_vfio_user.so 00:04:28.879 LIB libspdk_util.a 00:04:29.138 SO libspdk_util.so.10.0 00:04:29.397 SYMLINK libspdk_util.so 00:04:29.655 CC lib/rdma_provider/common.o 00:04:29.655 CC lib/json/json_parse.o 00:04:29.655 CC lib/conf/conf.o 00:04:29.655 CC lib/idxd/idxd.o 00:04:29.655 CC lib/json/json_util.o 00:04:29.655 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:29.655 CC lib/idxd/idxd_user.o 00:04:29.655 CC lib/json/json_write.o 00:04:29.655 CC lib/idxd/idxd_kernel.o 00:04:29.655 CC lib/rdma_utils/rdma_utils.o 00:04:29.655 CC lib/env_dpdk/env.o 00:04:29.655 CC lib/env_dpdk/memory.o 00:04:29.655 LIB libspdk_trace_parser.a 00:04:29.655 CC lib/env_dpdk/pci.o 00:04:29.655 CC lib/env_dpdk/init.o 00:04:29.655 CC lib/env_dpdk/threads.o 00:04:29.655 CC lib/env_dpdk/pci_ioat.o 00:04:29.655 CC lib/env_dpdk/pci_virtio.o 00:04:29.655 CC lib/env_dpdk/pci_vmd.o 00:04:29.655 CC lib/env_dpdk/pci_idxd.o 00:04:29.655 CC lib/env_dpdk/pci_event.o 00:04:29.655 CC lib/env_dpdk/sigbus_handler.o 00:04:29.655 CC lib/env_dpdk/pci_dpdk.o 00:04:29.655 CC lib/vmd/vmd.o 00:04:29.655 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:29.655 CC lib/vmd/led.o 00:04:29.655 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:29.655 SO libspdk_trace_parser.so.5.0 00:04:29.655 SYMLINK libspdk_trace_parser.so 00:04:29.914 LIB libspdk_rdma_provider.a 00:04:29.914 SO libspdk_rdma_provider.so.6.0 00:04:29.914 LIB libspdk_conf.a 00:04:29.914 LIB libspdk_rdma_utils.a 00:04:29.914 SO libspdk_conf.so.6.0 00:04:29.914 SYMLINK libspdk_rdma_provider.so 00:04:29.914 SO libspdk_rdma_utils.so.1.0 00:04:29.914 SYMLINK libspdk_conf.so 00:04:29.914 SYMLINK libspdk_rdma_utils.so 00:04:29.914 LIB libspdk_json.a 00:04:30.172 SO libspdk_json.so.6.0 00:04:30.172 SYMLINK libspdk_json.so 00:04:30.172 LIB libspdk_idxd.a 00:04:30.172 SO libspdk_idxd.so.12.1 00:04:30.432 CC lib/jsonrpc/jsonrpc_server.o 00:04:30.432 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:30.432 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:30.432 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:30.432 LIB libspdk_vmd.a 00:04:30.432 SYMLINK libspdk_idxd.so 00:04:30.432 SO libspdk_vmd.so.6.0 00:04:30.432 SYMLINK libspdk_vmd.so 00:04:30.692 LIB libspdk_jsonrpc.a 00:04:30.692 SO libspdk_jsonrpc.so.6.0 00:04:30.692 SYMLINK libspdk_jsonrpc.so 00:04:30.951 CC lib/rpc/rpc.o 00:04:31.210 LIB libspdk_rpc.a 00:04:31.210 SO libspdk_rpc.so.6.0 00:04:31.470 SYMLINK libspdk_rpc.so 00:04:31.470 CC lib/keyring/keyring.o 00:04:31.470 CC lib/keyring/keyring_rpc.o 00:04:31.470 CC lib/notify/notify.o 00:04:31.470 CC lib/notify/notify_rpc.o 00:04:31.470 CC lib/trace/trace.o 00:04:31.470 CC lib/trace/trace_flags.o 00:04:31.470 CC lib/trace/trace_rpc.o 00:04:31.730 LIB libspdk_notify.a 00:04:31.730 SO libspdk_notify.so.6.0 00:04:31.730 LIB libspdk_keyring.a 00:04:31.730 SYMLINK libspdk_notify.so 00:04:31.730 SO libspdk_keyring.so.1.0 00:04:31.988 SYMLINK libspdk_keyring.so 00:04:31.988 LIB libspdk_trace.a 00:04:31.988 SO libspdk_trace.so.10.0 00:04:31.988 SYMLINK libspdk_trace.so 00:04:32.247 CC lib/sock/sock.o 00:04:32.247 CC lib/sock/sock_rpc.o 00:04:32.247 CC lib/thread/thread.o 00:04:32.247 CC lib/thread/iobuf.o 00:04:32.815 LIB libspdk_env_dpdk.a 00:04:33.073 SO libspdk_env_dpdk.so.15.0 00:04:33.073 LIB libspdk_sock.a 00:04:33.073 SO libspdk_sock.so.10.0 00:04:33.073 SYMLINK libspdk_sock.so 00:04:33.332 SYMLINK libspdk_env_dpdk.so 00:04:33.332 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:33.332 CC lib/nvme/nvme_ctrlr.o 00:04:33.332 CC lib/nvme/nvme_fabric.o 00:04:33.332 CC lib/nvme/nvme_ns_cmd.o 00:04:33.332 CC lib/nvme/nvme_ns.o 00:04:33.332 CC lib/nvme/nvme_pcie_common.o 00:04:33.332 CC lib/nvme/nvme_pcie.o 00:04:33.332 CC lib/nvme/nvme_qpair.o 00:04:33.332 CC lib/nvme/nvme.o 00:04:33.332 CC lib/nvme/nvme_quirks.o 00:04:33.332 CC lib/nvme/nvme_transport.o 00:04:33.332 CC lib/nvme/nvme_discovery.o 00:04:33.332 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:33.332 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:33.332 CC lib/nvme/nvme_tcp.o 00:04:33.332 CC lib/nvme/nvme_opal.o 00:04:33.332 CC lib/nvme/nvme_io_msg.o 00:04:33.332 CC lib/nvme/nvme_poll_group.o 00:04:33.332 CC lib/nvme/nvme_zns.o 00:04:33.332 CC lib/nvme/nvme_stubs.o 00:04:33.332 CC lib/nvme/nvme_auth.o 00:04:33.332 CC lib/nvme/nvme_cuse.o 00:04:33.332 CC lib/nvme/nvme_vfio_user.o 00:04:33.332 CC lib/nvme/nvme_rdma.o 00:04:34.709 LIB libspdk_thread.a 00:04:34.709 SO libspdk_thread.so.10.1 00:04:34.709 SYMLINK libspdk_thread.so 00:04:34.709 CC lib/vfu_tgt/tgt_endpoint.o 00:04:34.709 CC lib/vfu_tgt/tgt_rpc.o 00:04:34.709 CC lib/blob/blobstore.o 00:04:34.709 CC lib/init/json_config.o 00:04:34.709 CC lib/accel/accel.o 00:04:34.709 CC lib/virtio/virtio.o 00:04:34.709 CC lib/init/subsystem.o 00:04:34.709 CC lib/accel/accel_rpc.o 00:04:34.709 CC lib/blob/request.o 00:04:34.709 CC lib/virtio/virtio_vhost_user.o 00:04:34.709 CC lib/init/subsystem_rpc.o 00:04:34.709 CC lib/accel/accel_sw.o 00:04:34.709 CC lib/blob/zeroes.o 00:04:34.709 CC lib/init/rpc.o 00:04:34.709 CC lib/virtio/virtio_vfio_user.o 00:04:34.709 CC lib/blob/blob_bs_dev.o 00:04:34.709 CC lib/virtio/virtio_pci.o 00:04:34.968 LIB libspdk_init.a 00:04:34.968 SO libspdk_init.so.5.0 00:04:35.226 LIB libspdk_vfu_tgt.a 00:04:35.226 LIB libspdk_virtio.a 00:04:35.226 SYMLINK libspdk_init.so 00:04:35.226 SO libspdk_vfu_tgt.so.3.0 00:04:35.226 SO libspdk_virtio.so.7.0 00:04:35.226 SYMLINK libspdk_vfu_tgt.so 00:04:35.226 SYMLINK libspdk_virtio.so 00:04:35.226 CC lib/event/app.o 00:04:35.226 CC lib/event/reactor.o 00:04:35.226 CC lib/event/log_rpc.o 00:04:35.226 CC lib/event/app_rpc.o 
00:04:35.226 CC lib/event/scheduler_static.o 00:04:35.793 LIB libspdk_event.a 00:04:35.793 SO libspdk_event.so.14.0 00:04:35.793 SYMLINK libspdk_event.so 00:04:36.051 LIB libspdk_accel.a 00:04:36.051 SO libspdk_accel.so.16.0 00:04:36.051 SYMLINK libspdk_accel.so 00:04:36.051 LIB libspdk_nvme.a 00:04:36.310 CC lib/bdev/bdev.o 00:04:36.310 CC lib/bdev/bdev_rpc.o 00:04:36.310 CC lib/bdev/bdev_zone.o 00:04:36.310 CC lib/bdev/part.o 00:04:36.310 CC lib/bdev/scsi_nvme.o 00:04:36.310 SO libspdk_nvme.so.13.1 00:04:36.887 SYMLINK libspdk_nvme.so 00:04:39.451 LIB libspdk_blob.a 00:04:39.451 SO libspdk_blob.so.11.0 00:04:39.451 SYMLINK libspdk_blob.so 00:04:39.451 CC lib/blobfs/blobfs.o 00:04:39.451 CC lib/blobfs/tree.o 00:04:39.451 CC lib/lvol/lvol.o 00:04:40.386 LIB libspdk_bdev.a 00:04:40.644 SO libspdk_bdev.so.16.0 00:04:40.644 SYMLINK libspdk_bdev.so 00:04:40.644 LIB libspdk_blobfs.a 00:04:40.905 SO libspdk_blobfs.so.10.0 00:04:40.905 SYMLINK libspdk_blobfs.so 00:04:40.905 LIB libspdk_lvol.a 00:04:40.905 CC lib/ublk/ublk.o 00:04:40.905 CC lib/ublk/ublk_rpc.o 00:04:40.905 CC lib/nvmf/ctrlr_discovery.o 00:04:40.905 CC lib/nvmf/ctrlr.o 00:04:40.905 CC lib/nvmf/ctrlr_bdev.o 00:04:40.905 CC lib/nvmf/subsystem.o 00:04:40.905 CC lib/nvmf/nvmf.o 00:04:40.905 CC lib/nvmf/nvmf_rpc.o 00:04:40.905 CC lib/nvmf/transport.o 00:04:40.905 CC lib/scsi/dev.o 00:04:40.905 CC lib/nvmf/tcp.o 00:04:40.905 CC lib/scsi/lun.o 00:04:40.905 CC lib/nvmf/stubs.o 00:04:40.905 CC lib/scsi/port.o 00:04:40.905 CC lib/nvmf/mdns_server.o 00:04:40.905 CC lib/scsi/scsi.o 00:04:40.905 CC lib/ftl/ftl_core.o 00:04:40.905 CC lib/nvmf/vfio_user.o 00:04:40.905 CC lib/scsi/scsi_bdev.o 00:04:40.905 CC lib/ftl/ftl_init.o 00:04:40.905 CC lib/scsi/scsi_pr.o 00:04:40.905 CC lib/nvmf/rdma.o 00:04:40.905 CC lib/nvmf/auth.o 00:04:40.905 CC lib/ftl/ftl_layout.o 00:04:40.905 CC lib/scsi/scsi_rpc.o 00:04:40.905 CC lib/scsi/task.o 00:04:40.905 CC lib/ftl/ftl_debug.o 00:04:40.905 CC lib/ftl/ftl_io.o 00:04:40.905 CC lib/ftl/ftl_sb.o 00:04:40.905 SO libspdk_lvol.so.10.0 00:04:40.905 CC lib/nbd/nbd.o 00:04:40.905 CC lib/ftl/ftl_l2p.o 00:04:40.905 CC lib/nbd/nbd_rpc.o 00:04:40.905 CC lib/ftl/ftl_l2p_flat.o 00:04:40.905 CC lib/ftl/ftl_nv_cache.o 00:04:40.905 CC lib/ftl/ftl_band.o 00:04:40.905 CC lib/ftl/ftl_band_ops.o 00:04:40.905 CC lib/ftl/ftl_writer.o 00:04:40.905 CC lib/ftl/ftl_rq.o 00:04:40.905 CC lib/ftl/ftl_reloc.o 00:04:40.905 CC lib/ftl/ftl_l2p_cache.o 00:04:40.905 CC lib/ftl/ftl_p2l.o 00:04:40.905 CC lib/ftl/mngt/ftl_mngt.o 00:04:40.905 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:40.905 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:40.905 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:40.905 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:40.905 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:40.905 SYMLINK libspdk_lvol.so 00:04:41.170 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.170 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.170 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.439 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:41.439 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:41.439 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:41.439 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:41.439 CC lib/ftl/utils/ftl_conf.o 00:04:41.439 CC lib/ftl/utils/ftl_md.o 00:04:41.439 CC lib/ftl/utils/ftl_mempool.o 00:04:41.439 CC lib/ftl/utils/ftl_bitmap.o 00:04:41.439 CC lib/ftl/utils/ftl_property.o 00:04:41.439 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:41.439 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:41.439 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:41.439 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:41.439 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:04:41.439 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:41.439 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:41.439 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:41.439 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:41.699 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:41.699 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:41.699 CC lib/ftl/base/ftl_base_dev.o 00:04:41.699 CC lib/ftl/base/ftl_base_bdev.o 00:04:41.699 CC lib/ftl/ftl_trace.o 00:04:41.699 LIB libspdk_nbd.a 00:04:41.699 SO libspdk_nbd.so.7.0 00:04:41.957 LIB libspdk_scsi.a 00:04:41.957 SYMLINK libspdk_nbd.so 00:04:41.957 SO libspdk_scsi.so.9.0 00:04:41.957 LIB libspdk_ublk.a 00:04:41.957 SO libspdk_ublk.so.3.0 00:04:41.957 SYMLINK libspdk_scsi.so 00:04:42.216 SYMLINK libspdk_ublk.so 00:04:42.216 CC lib/iscsi/conn.o 00:04:42.216 CC lib/vhost/vhost.o 00:04:42.216 CC lib/vhost/vhost_rpc.o 00:04:42.216 CC lib/iscsi/init_grp.o 00:04:42.216 CC lib/vhost/vhost_scsi.o 00:04:42.216 CC lib/iscsi/iscsi.o 00:04:42.216 CC lib/iscsi/md5.o 00:04:42.216 CC lib/vhost/vhost_blk.o 00:04:42.216 CC lib/iscsi/param.o 00:04:42.216 CC lib/vhost/rte_vhost_user.o 00:04:42.216 CC lib/iscsi/portal_grp.o 00:04:42.216 CC lib/iscsi/iscsi_subsystem.o 00:04:42.216 CC lib/iscsi/tgt_node.o 00:04:42.216 CC lib/iscsi/iscsi_rpc.o 00:04:42.216 CC lib/iscsi/task.o 00:04:42.475 LIB libspdk_ftl.a 00:04:42.732 SO libspdk_ftl.so.9.0 00:04:42.991 SYMLINK libspdk_ftl.so 00:04:43.927 LIB libspdk_vhost.a 00:04:43.927 SO libspdk_vhost.so.8.0 00:04:43.927 LIB libspdk_nvmf.a 00:04:43.927 SYMLINK libspdk_vhost.so 00:04:44.186 LIB libspdk_iscsi.a 00:04:44.186 SO libspdk_nvmf.so.19.0 00:04:44.186 SO libspdk_iscsi.so.8.0 00:04:44.445 SYMLINK libspdk_iscsi.so 00:04:44.445 SYMLINK libspdk_nvmf.so 00:04:45.013 CC module/env_dpdk/env_dpdk_rpc.o 00:04:45.013 CC module/vfu_device/vfu_virtio.o 00:04:45.013 CC module/vfu_device/vfu_virtio_blk.o 00:04:45.013 CC module/vfu_device/vfu_virtio_scsi.o 00:04:45.013 CC module/vfu_device/vfu_virtio_rpc.o 00:04:45.013 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:45.013 CC module/keyring/linux/keyring.o 00:04:45.013 CC module/keyring/linux/keyring_rpc.o 00:04:45.013 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:45.013 CC module/keyring/file/keyring.o 00:04:45.013 CC module/keyring/file/keyring_rpc.o 00:04:45.013 CC module/scheduler/gscheduler/gscheduler.o 00:04:45.013 CC module/accel/ioat/accel_ioat.o 00:04:45.013 CC module/blob/bdev/blob_bdev.o 00:04:45.013 CC module/sock/posix/posix.o 00:04:45.013 CC module/accel/ioat/accel_ioat_rpc.o 00:04:45.013 CC module/accel/dsa/accel_dsa.o 00:04:45.013 CC module/accel/dsa/accel_dsa_rpc.o 00:04:45.013 CC module/accel/iaa/accel_iaa.o 00:04:45.013 CC module/accel/iaa/accel_iaa_rpc.o 00:04:45.013 CC module/accel/error/accel_error.o 00:04:45.013 CC module/accel/error/accel_error_rpc.o 00:04:45.013 LIB libspdk_env_dpdk_rpc.a 00:04:45.013 SO libspdk_env_dpdk_rpc.so.6.0 00:04:45.013 SYMLINK libspdk_env_dpdk_rpc.so 00:04:45.013 LIB libspdk_keyring_linux.a 00:04:45.013 LIB libspdk_keyring_file.a 00:04:45.013 LIB libspdk_scheduler_gscheduler.a 00:04:45.013 LIB libspdk_scheduler_dpdk_governor.a 00:04:45.013 SO libspdk_keyring_file.so.1.0 00:04:45.013 SO libspdk_keyring_linux.so.1.0 00:04:45.272 SO libspdk_scheduler_gscheduler.so.4.0 00:04:45.272 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:45.272 LIB libspdk_accel_error.a 00:04:45.272 LIB libspdk_accel_ioat.a 00:04:45.272 LIB libspdk_scheduler_dynamic.a 00:04:45.272 SYMLINK libspdk_keyring_file.so 00:04:45.272 SYMLINK libspdk_keyring_linux.so 
00:04:45.272 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:45.272 SYMLINK libspdk_scheduler_gscheduler.so 00:04:45.272 SO libspdk_accel_ioat.so.6.0 00:04:45.272 SO libspdk_scheduler_dynamic.so.4.0 00:04:45.272 LIB libspdk_accel_iaa.a 00:04:45.272 SO libspdk_accel_error.so.2.0 00:04:45.272 SO libspdk_accel_iaa.so.3.0 00:04:45.272 LIB libspdk_accel_dsa.a 00:04:45.272 SYMLINK libspdk_accel_error.so 00:04:45.272 SYMLINK libspdk_scheduler_dynamic.so 00:04:45.272 SO libspdk_accel_dsa.so.5.0 00:04:45.272 SYMLINK libspdk_accel_ioat.so 00:04:45.272 LIB libspdk_blob_bdev.a 00:04:45.272 SYMLINK libspdk_accel_iaa.so 00:04:45.272 SO libspdk_blob_bdev.so.11.0 00:04:45.272 SYMLINK libspdk_accel_dsa.so 00:04:45.530 SYMLINK libspdk_blob_bdev.so 00:04:45.530 LIB libspdk_vfu_device.a 00:04:45.530 SO libspdk_vfu_device.so.3.0 00:04:45.789 SYMLINK libspdk_vfu_device.so 00:04:45.789 CC module/bdev/gpt/gpt.o 00:04:45.789 CC module/bdev/aio/bdev_aio.o 00:04:45.789 CC module/bdev/gpt/vbdev_gpt.o 00:04:45.789 CC module/bdev/aio/bdev_aio_rpc.o 00:04:45.789 CC module/bdev/error/vbdev_error.o 00:04:45.789 CC module/bdev/null/bdev_null.o 00:04:45.789 CC module/bdev/null/bdev_null_rpc.o 00:04:45.789 CC module/bdev/error/vbdev_error_rpc.o 00:04:45.789 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.789 CC module/bdev/lvol/vbdev_lvol.o 00:04:45.789 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:45.789 CC module/bdev/delay/vbdev_delay.o 00:04:45.789 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:45.789 CC module/bdev/split/vbdev_split.o 00:04:45.789 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:45.789 CC module/bdev/split/vbdev_split_rpc.o 00:04:45.789 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:45.789 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:45.789 CC module/bdev/malloc/bdev_malloc.o 00:04:45.789 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:45.789 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.789 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.789 CC module/bdev/nvme/bdev_nvme.o 00:04:45.789 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:45.789 CC module/bdev/iscsi/bdev_iscsi.o 00:04:45.789 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.789 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:45.789 CC module/bdev/nvme/nvme_rpc.o 00:04:45.789 CC module/bdev/nvme/bdev_mdns_client.o 00:04:45.789 CC module/bdev/ftl/bdev_ftl.o 00:04:45.789 CC module/bdev/nvme/vbdev_opal.o 00:04:45.789 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:45.789 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:45.789 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:45.789 CC module/blobfs/bdev/blobfs_bdev.o 00:04:45.789 CC module/bdev/raid/bdev_raid.o 00:04:45.789 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:45.789 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.789 CC module/bdev/raid/bdev_raid_sb.o 00:04:45.789 CC module/bdev/raid/raid0.o 00:04:45.789 CC module/bdev/raid/raid1.o 00:04:45.789 CC module/bdev/raid/concat.o 00:04:46.047 LIB libspdk_sock_posix.a 00:04:46.047 SO libspdk_sock_posix.so.6.0 00:04:46.047 LIB libspdk_blobfs_bdev.a 00:04:46.047 SO libspdk_blobfs_bdev.so.6.0 00:04:46.047 LIB libspdk_bdev_split.a 00:04:46.305 SYMLINK libspdk_sock_posix.so 00:04:46.305 SO libspdk_bdev_split.so.6.0 00:04:46.305 LIB libspdk_bdev_gpt.a 00:04:46.305 SYMLINK libspdk_blobfs_bdev.so 00:04:46.305 LIB libspdk_bdev_null.a 00:04:46.305 LIB libspdk_bdev_error.a 00:04:46.305 LIB libspdk_bdev_passthru.a 00:04:46.305 SO libspdk_bdev_gpt.so.6.0 00:04:46.305 SO libspdk_bdev_null.so.6.0 00:04:46.305 LIB libspdk_bdev_ftl.a 00:04:46.305 SYMLINK 
libspdk_bdev_split.so 00:04:46.305 SO libspdk_bdev_passthru.so.6.0 00:04:46.305 SO libspdk_bdev_error.so.6.0 00:04:46.305 SO libspdk_bdev_ftl.so.6.0 00:04:46.305 LIB libspdk_bdev_malloc.a 00:04:46.305 SYMLINK libspdk_bdev_gpt.so 00:04:46.306 LIB libspdk_bdev_zone_block.a 00:04:46.306 LIB libspdk_bdev_aio.a 00:04:46.306 SYMLINK libspdk_bdev_null.so 00:04:46.306 SO libspdk_bdev_malloc.so.6.0 00:04:46.306 SYMLINK libspdk_bdev_passthru.so 00:04:46.306 SO libspdk_bdev_aio.so.6.0 00:04:46.306 SYMLINK libspdk_bdev_error.so 00:04:46.306 SO libspdk_bdev_zone_block.so.6.0 00:04:46.306 SYMLINK libspdk_bdev_ftl.so 00:04:46.306 LIB libspdk_bdev_iscsi.a 00:04:46.306 LIB libspdk_bdev_virtio.a 00:04:46.306 LIB libspdk_bdev_delay.a 00:04:46.306 SO libspdk_bdev_iscsi.so.6.0 00:04:46.306 SYMLINK libspdk_bdev_malloc.so 00:04:46.306 SYMLINK libspdk_bdev_aio.so 00:04:46.306 SYMLINK libspdk_bdev_zone_block.so 00:04:46.306 LIB libspdk_bdev_lvol.a 00:04:46.306 SO libspdk_bdev_delay.so.6.0 00:04:46.306 SO libspdk_bdev_virtio.so.6.0 00:04:46.563 SYMLINK libspdk_bdev_iscsi.so 00:04:46.563 SO libspdk_bdev_lvol.so.6.0 00:04:46.563 SYMLINK libspdk_bdev_delay.so 00:04:46.563 SYMLINK libspdk_bdev_virtio.so 00:04:46.563 SYMLINK libspdk_bdev_lvol.so 00:04:47.497 LIB libspdk_bdev_raid.a 00:04:47.497 SO libspdk_bdev_raid.so.6.0 00:04:47.755 SYMLINK libspdk_bdev_raid.so 00:04:50.287 LIB libspdk_bdev_nvme.a 00:04:50.287 SO libspdk_bdev_nvme.so.7.0 00:04:50.546 SYMLINK libspdk_bdev_nvme.so 00:04:50.805 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.805 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.805 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:50.805 CC module/event/subsystems/keyring/keyring.o 00:04:50.805 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.805 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.805 CC module/event/subsystems/vmd/vmd.o 00:04:50.805 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:50.805 CC module/event/subsystems/sock/sock.o 00:04:51.064 LIB libspdk_event_keyring.a 00:04:51.064 LIB libspdk_event_vfu_tgt.a 00:04:51.064 LIB libspdk_event_iobuf.a 00:04:51.064 SO libspdk_event_keyring.so.1.0 00:04:51.064 SO libspdk_event_vfu_tgt.so.3.0 00:04:51.064 SO libspdk_event_iobuf.so.3.0 00:04:51.064 LIB libspdk_event_vhost_blk.a 00:04:51.064 LIB libspdk_event_sock.a 00:04:51.064 LIB libspdk_event_scheduler.a 00:04:51.064 LIB libspdk_event_vmd.a 00:04:51.064 SYMLINK libspdk_event_keyring.so 00:04:51.064 SYMLINK libspdk_event_vfu_tgt.so 00:04:51.064 SO libspdk_event_vhost_blk.so.3.0 00:04:51.064 SO libspdk_event_sock.so.5.0 00:04:51.064 SO libspdk_event_scheduler.so.4.0 00:04:51.064 SO libspdk_event_vmd.so.6.0 00:04:51.064 SYMLINK libspdk_event_iobuf.so 00:04:51.064 SYMLINK libspdk_event_vhost_blk.so 00:04:51.064 SYMLINK libspdk_event_scheduler.so 00:04:51.064 SYMLINK libspdk_event_sock.so 00:04:51.323 SYMLINK libspdk_event_vmd.so 00:04:51.323 CC module/event/subsystems/accel/accel.o 00:04:51.581 LIB libspdk_event_accel.a 00:04:51.840 SO libspdk_event_accel.so.6.0 00:04:51.840 SYMLINK libspdk_event_accel.so 00:04:52.099 CC module/event/subsystems/bdev/bdev.o 00:04:52.358 LIB libspdk_event_bdev.a 00:04:52.358 SO libspdk_event_bdev.so.6.0 00:04:52.358 SYMLINK libspdk_event_bdev.so 00:04:52.616 CC module/event/subsystems/nbd/nbd.o 00:04:52.616 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.616 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.616 CC module/event/subsystems/ublk/ublk.o 00:04:52.616 CC module/event/subsystems/scsi/scsi.o 00:04:52.874 LIB libspdk_event_ublk.a 
00:04:52.874 LIB libspdk_event_nbd.a 00:04:52.874 LIB libspdk_event_scsi.a 00:04:52.874 SO libspdk_event_ublk.so.3.0 00:04:52.874 SO libspdk_event_nbd.so.6.0 00:04:52.874 SO libspdk_event_scsi.so.6.0 00:04:52.874 SYMLINK libspdk_event_ublk.so 00:04:52.874 LIB libspdk_event_nvmf.a 00:04:52.874 SYMLINK libspdk_event_nbd.so 00:04:52.874 SYMLINK libspdk_event_scsi.so 00:04:52.874 SO libspdk_event_nvmf.so.6.0 00:04:53.138 SYMLINK libspdk_event_nvmf.so 00:04:53.138 CC module/event/subsystems/iscsi/iscsi.o 00:04:53.138 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:53.425 LIB libspdk_event_iscsi.a 00:04:53.425 LIB libspdk_event_vhost_scsi.a 00:04:53.425 SO libspdk_event_iscsi.so.6.0 00:04:53.425 SO libspdk_event_vhost_scsi.so.3.0 00:04:53.425 SYMLINK libspdk_event_iscsi.so 00:04:53.425 SYMLINK libspdk_event_vhost_scsi.so 00:04:53.695 SO libspdk.so.6.0 00:04:53.695 SYMLINK libspdk.so 00:04:53.956 CC app/trace_record/trace_record.o 00:04:53.956 CC app/spdk_lspci/spdk_lspci.o 00:04:53.956 CC app/spdk_nvme_identify/identify.o 00:04:53.956 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.956 CXX app/trace/trace.o 00:04:53.956 CC test/rpc_client/rpc_client_test.o 00:04:53.956 TEST_HEADER include/spdk/accel.h 00:04:53.956 TEST_HEADER include/spdk/accel_module.h 00:04:53.956 TEST_HEADER include/spdk/assert.h 00:04:53.956 TEST_HEADER include/spdk/barrier.h 00:04:53.956 CC app/spdk_top/spdk_top.o 00:04:53.956 TEST_HEADER include/spdk/base64.h 00:04:53.956 TEST_HEADER include/spdk/bdev_module.h 00:04:53.956 TEST_HEADER include/spdk/bdev.h 00:04:53.956 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.956 CC app/spdk_nvme_perf/perf.o 00:04:53.956 TEST_HEADER include/spdk/bit_array.h 00:04:53.956 TEST_HEADER include/spdk/bit_pool.h 00:04:53.956 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.956 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.956 TEST_HEADER include/spdk/blobfs.h 00:04:53.956 TEST_HEADER include/spdk/blob.h 00:04:53.956 TEST_HEADER include/spdk/config.h 00:04:53.956 TEST_HEADER include/spdk/conf.h 00:04:53.956 TEST_HEADER include/spdk/cpuset.h 00:04:53.956 TEST_HEADER include/spdk/crc16.h 00:04:53.956 TEST_HEADER include/spdk/crc32.h 00:04:53.956 TEST_HEADER include/spdk/crc64.h 00:04:53.956 TEST_HEADER include/spdk/dma.h 00:04:53.956 TEST_HEADER include/spdk/dif.h 00:04:53.956 TEST_HEADER include/spdk/endian.h 00:04:53.956 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.956 TEST_HEADER include/spdk/env.h 00:04:53.956 TEST_HEADER include/spdk/event.h 00:04:53.956 TEST_HEADER include/spdk/fd_group.h 00:04:53.956 TEST_HEADER include/spdk/fd.h 00:04:53.956 TEST_HEADER include/spdk/file.h 00:04:53.956 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.956 TEST_HEADER include/spdk/ftl.h 00:04:53.956 TEST_HEADER include/spdk/hexlify.h 00:04:53.956 TEST_HEADER include/spdk/histogram_data.h 00:04:53.956 TEST_HEADER include/spdk/idxd.h 00:04:53.956 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.956 TEST_HEADER include/spdk/init.h 00:04:53.956 TEST_HEADER include/spdk/ioat.h 00:04:53.956 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.956 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.956 TEST_HEADER include/spdk/json.h 00:04:53.956 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.956 TEST_HEADER include/spdk/keyring_module.h 00:04:53.956 TEST_HEADER include/spdk/keyring.h 00:04:53.956 TEST_HEADER include/spdk/likely.h 00:04:53.956 TEST_HEADER include/spdk/log.h 00:04:53.956 TEST_HEADER include/spdk/lvol.h 00:04:53.956 TEST_HEADER include/spdk/memory.h 00:04:53.956 TEST_HEADER include/spdk/nbd.h 00:04:53.956 
TEST_HEADER include/spdk/mmio.h 00:04:53.956 TEST_HEADER include/spdk/net.h 00:04:53.956 TEST_HEADER include/spdk/notify.h 00:04:53.956 TEST_HEADER include/spdk/nvme.h 00:04:53.956 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.956 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.956 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.956 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.956 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.956 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.956 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.956 TEST_HEADER include/spdk/nvmf.h 00:04:53.956 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.956 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.956 TEST_HEADER include/spdk/opal.h 00:04:53.956 TEST_HEADER include/spdk/opal_spec.h 00:04:53.956 TEST_HEADER include/spdk/pci_ids.h 00:04:53.956 TEST_HEADER include/spdk/pipe.h 00:04:53.956 TEST_HEADER include/spdk/queue.h 00:04:53.956 TEST_HEADER include/spdk/reduce.h 00:04:53.956 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.956 TEST_HEADER include/spdk/scheduler.h 00:04:53.956 TEST_HEADER include/spdk/rpc.h 00:04:53.956 TEST_HEADER include/spdk/scsi.h 00:04:53.956 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.956 TEST_HEADER include/spdk/sock.h 00:04:53.956 TEST_HEADER include/spdk/stdinc.h 00:04:53.956 TEST_HEADER include/spdk/string.h 00:04:53.956 TEST_HEADER include/spdk/thread.h 00:04:53.956 TEST_HEADER include/spdk/trace.h 00:04:53.956 TEST_HEADER include/spdk/trace_parser.h 00:04:53.956 TEST_HEADER include/spdk/tree.h 00:04:53.956 TEST_HEADER include/spdk/ublk.h 00:04:53.956 TEST_HEADER include/spdk/util.h 00:04:53.956 TEST_HEADER include/spdk/uuid.h 00:04:53.956 TEST_HEADER include/spdk/version.h 00:04:53.956 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.956 TEST_HEADER include/spdk/vhost.h 00:04:53.956 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.956 TEST_HEADER include/spdk/vmd.h 00:04:53.956 TEST_HEADER include/spdk/xor.h 00:04:53.956 TEST_HEADER include/spdk/zipf.h 00:04:53.956 CXX test/cpp_headers/accel.o 00:04:53.956 CXX test/cpp_headers/accel_module.o 00:04:53.956 CXX test/cpp_headers/assert.o 00:04:53.956 CXX test/cpp_headers/barrier.o 00:04:53.956 CXX test/cpp_headers/base64.o 00:04:53.956 CXX test/cpp_headers/bdev.o 00:04:53.956 CXX test/cpp_headers/bdev_module.o 00:04:53.956 CXX test/cpp_headers/bdev_zone.o 00:04:53.956 CXX test/cpp_headers/bit_array.o 00:04:53.956 CXX test/cpp_headers/bit_pool.o 00:04:53.956 CC app/spdk_dd/spdk_dd.o 00:04:53.956 CXX test/cpp_headers/blob_bdev.o 00:04:53.956 CXX test/cpp_headers/blobfs_bdev.o 00:04:53.956 CXX test/cpp_headers/blobfs.o 00:04:53.956 CXX test/cpp_headers/blob.o 00:04:53.956 CXX test/cpp_headers/conf.o 00:04:53.956 CXX test/cpp_headers/config.o 00:04:53.956 CXX test/cpp_headers/cpuset.o 00:04:53.956 CC app/iscsi_tgt/iscsi_tgt.o 00:04:53.956 CXX test/cpp_headers/crc16.o 00:04:53.956 CC app/nvmf_tgt/nvmf_main.o 00:04:53.956 CC app/spdk_tgt/spdk_tgt.o 00:04:53.956 CXX test/cpp_headers/crc32.o 00:04:53.956 CC examples/ioat/verify/verify.o 00:04:53.956 CC test/app/jsoncat/jsoncat.o 00:04:53.956 CC test/app/histogram_perf/histogram_perf.o 00:04:53.956 CC examples/util/zipf/zipf.o 00:04:53.956 CC test/thread/poller_perf/poller_perf.o 00:04:53.956 CC examples/ioat/perf/perf.o 00:04:53.956 CC test/app/stub/stub.o 00:04:53.956 CC test/env/vtophys/vtophys.o 00:04:53.956 CC app/fio/nvme/fio_plugin.o 00:04:53.956 CC test/env/pci/pci_ut.o 00:04:53.956 CC test/env/memory/memory_ut.o 00:04:53.956 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 
00:04:54.223 CC test/dma/test_dma/test_dma.o 00:04:54.223 CC test/app/bdev_svc/bdev_svc.o 00:04:54.223 CC app/fio/bdev/fio_plugin.o 00:04:54.223 LINK spdk_lspci 00:04:54.223 CC test/env/mem_callbacks/mem_callbacks.o 00:04:54.223 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:54.224 LINK rpc_client_test 00:04:54.483 LINK spdk_nvme_discover 00:04:54.483 LINK interrupt_tgt 00:04:54.483 LINK histogram_perf 00:04:54.483 LINK jsoncat 00:04:54.483 LINK poller_perf 00:04:54.483 LINK vtophys 00:04:54.483 LINK zipf 00:04:54.483 CXX test/cpp_headers/crc64.o 00:04:54.483 LINK nvmf_tgt 00:04:54.483 CXX test/cpp_headers/dif.o 00:04:54.483 CXX test/cpp_headers/dma.o 00:04:54.483 CXX test/cpp_headers/endian.o 00:04:54.484 LINK spdk_trace_record 00:04:54.484 CXX test/cpp_headers/env_dpdk.o 00:04:54.484 CXX test/cpp_headers/env.o 00:04:54.484 CXX test/cpp_headers/event.o 00:04:54.484 CXX test/cpp_headers/fd_group.o 00:04:54.484 CXX test/cpp_headers/fd.o 00:04:54.484 CXX test/cpp_headers/file.o 00:04:54.484 LINK stub 00:04:54.484 CXX test/cpp_headers/ftl.o 00:04:54.484 LINK iscsi_tgt 00:04:54.484 CXX test/cpp_headers/gpt_spec.o 00:04:54.484 LINK env_dpdk_post_init 00:04:54.484 CXX test/cpp_headers/hexlify.o 00:04:54.484 CXX test/cpp_headers/histogram_data.o 00:04:54.484 CXX test/cpp_headers/idxd.o 00:04:54.484 LINK verify 00:04:54.484 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:54.484 CXX test/cpp_headers/idxd_spec.o 00:04:54.484 LINK spdk_tgt 00:04:54.484 LINK bdev_svc 00:04:54.484 LINK ioat_perf 00:04:54.484 CXX test/cpp_headers/init.o 00:04:54.754 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:54.754 CXX test/cpp_headers/ioat.o 00:04:54.754 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:54.754 CXX test/cpp_headers/ioat_spec.o 00:04:54.754 CXX test/cpp_headers/iscsi_spec.o 00:04:54.754 CXX test/cpp_headers/json.o 00:04:54.754 CXX test/cpp_headers/jsonrpc.o 00:04:54.754 CXX test/cpp_headers/keyring.o 00:04:54.754 LINK spdk_trace 00:04:54.754 CXX test/cpp_headers/keyring_module.o 00:04:54.754 CXX test/cpp_headers/likely.o 00:04:54.754 LINK spdk_dd 00:04:54.754 LINK pci_ut 00:04:54.754 CXX test/cpp_headers/log.o 00:04:54.754 CXX test/cpp_headers/lvol.o 00:04:54.754 CXX test/cpp_headers/memory.o 00:04:54.754 CXX test/cpp_headers/mmio.o 00:04:54.754 CXX test/cpp_headers/nbd.o 00:04:55.016 CXX test/cpp_headers/net.o 00:04:55.017 CXX test/cpp_headers/notify.o 00:04:55.017 CXX test/cpp_headers/nvme.o 00:04:55.017 CXX test/cpp_headers/nvme_intel.o 00:04:55.017 CXX test/cpp_headers/nvme_ocssd.o 00:04:55.017 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:55.017 CXX test/cpp_headers/nvme_spec.o 00:04:55.017 CXX test/cpp_headers/nvme_zns.o 00:04:55.017 LINK test_dma 00:04:55.017 CXX test/cpp_headers/nvmf_cmd.o 00:04:55.017 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:55.017 CXX test/cpp_headers/nvmf.o 00:04:55.017 CXX test/cpp_headers/nvmf_spec.o 00:04:55.017 CXX test/cpp_headers/nvmf_transport.o 00:04:55.017 CXX test/cpp_headers/opal.o 00:04:55.017 CXX test/cpp_headers/opal_spec.o 00:04:55.017 CXX test/cpp_headers/pci_ids.o 00:04:55.279 CXX test/cpp_headers/pipe.o 00:04:55.279 CC test/event/event_perf/event_perf.o 00:04:55.279 CC test/event/reactor/reactor.o 00:04:55.279 LINK nvme_fuzz 00:04:55.279 CXX test/cpp_headers/queue.o 00:04:55.279 CC examples/sock/hello_world/hello_sock.o 00:04:55.279 CXX test/cpp_headers/reduce.o 00:04:55.279 CC examples/thread/thread/thread_ex.o 00:04:55.279 CC test/event/reactor_perf/reactor_perf.o 00:04:55.279 LINK spdk_bdev 00:04:55.279 LINK spdk_nvme 00:04:55.279 CC 
examples/idxd/perf/perf.o 00:04:55.279 CXX test/cpp_headers/rpc.o 00:04:55.279 CXX test/cpp_headers/scheduler.o 00:04:55.279 CC examples/vmd/led/led.o 00:04:55.279 CXX test/cpp_headers/scsi.o 00:04:55.279 CC test/event/app_repeat/app_repeat.o 00:04:55.279 CC examples/vmd/lsvmd/lsvmd.o 00:04:55.279 CXX test/cpp_headers/scsi_spec.o 00:04:55.279 CXX test/cpp_headers/sock.o 00:04:55.279 CXX test/cpp_headers/stdinc.o 00:04:55.279 CXX test/cpp_headers/string.o 00:04:55.279 CXX test/cpp_headers/thread.o 00:04:55.279 CXX test/cpp_headers/trace.o 00:04:55.279 CXX test/cpp_headers/trace_parser.o 00:04:55.279 CC test/event/scheduler/scheduler.o 00:04:55.279 CXX test/cpp_headers/tree.o 00:04:55.279 CXX test/cpp_headers/ublk.o 00:04:55.279 CXX test/cpp_headers/util.o 00:04:55.279 CXX test/cpp_headers/uuid.o 00:04:55.279 CXX test/cpp_headers/version.o 00:04:55.542 CXX test/cpp_headers/vfio_user_pci.o 00:04:55.542 CXX test/cpp_headers/vfio_user_spec.o 00:04:55.542 LINK spdk_nvme_perf 00:04:55.542 CXX test/cpp_headers/vhost.o 00:04:55.542 CXX test/cpp_headers/vmd.o 00:04:55.542 CXX test/cpp_headers/xor.o 00:04:55.542 CXX test/cpp_headers/zipf.o 00:04:55.542 LINK reactor 00:04:55.542 LINK vhost_fuzz 00:04:55.542 CC app/vhost/vhost.o 00:04:55.542 LINK event_perf 00:04:55.542 LINK spdk_nvme_identify 00:04:55.542 LINK reactor_perf 00:04:55.542 LINK mem_callbacks 00:04:55.542 LINK led 00:04:55.542 LINK spdk_top 00:04:55.542 LINK lsvmd 00:04:55.542 LINK app_repeat 00:04:55.806 CC test/nvme/aer/aer.o 00:04:55.806 CC test/nvme/e2edp/nvme_dp.o 00:04:55.806 LINK hello_sock 00:04:55.806 CC test/nvme/startup/startup.o 00:04:55.806 CC test/nvme/reserve/reserve.o 00:04:55.806 CC test/nvme/reset/reset.o 00:04:55.806 CC test/nvme/sgl/sgl.o 00:04:55.806 CC test/nvme/err_injection/err_injection.o 00:04:55.806 CC test/nvme/overhead/overhead.o 00:04:55.806 LINK thread 00:04:55.806 CC test/accel/dif/dif.o 00:04:55.806 CC test/blobfs/mkfs/mkfs.o 00:04:55.806 CC test/nvme/simple_copy/simple_copy.o 00:04:55.806 CC test/nvme/connect_stress/connect_stress.o 00:04:55.806 LINK scheduler 00:04:55.806 CC test/nvme/compliance/nvme_compliance.o 00:04:55.806 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.806 CC test/lvol/esnap/esnap.o 00:04:55.806 CC test/nvme/boot_partition/boot_partition.o 00:04:55.806 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:55.806 CC test/nvme/cuse/cuse.o 00:04:55.806 LINK vhost 00:04:55.806 CC test/nvme/fdp/fdp.o 00:04:55.806 LINK idxd_perf 00:04:56.065 LINK startup 00:04:56.065 LINK err_injection 00:04:56.065 LINK reserve 00:04:56.065 LINK boot_partition 00:04:56.065 LINK nvme_dp 00:04:56.065 LINK simple_copy 00:04:56.065 LINK fused_ordering 00:04:56.065 LINK doorbell_aers 00:04:56.065 LINK overhead 00:04:56.065 LINK mkfs 00:04:56.065 LINK aer 00:04:56.323 CC examples/nvme/arbitration/arbitration.o 00:04:56.323 CC examples/nvme/hello_world/hello_world.o 00:04:56.323 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:56.323 CC examples/nvme/reconnect/reconnect.o 00:04:56.323 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:56.323 LINK connect_stress 00:04:56.323 CC examples/nvme/abort/abort.o 00:04:56.323 CC examples/nvme/hotplug/hotplug.o 00:04:56.323 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:56.323 LINK memory_ut 00:04:56.323 LINK sgl 00:04:56.323 LINK reset 00:04:56.323 LINK nvme_compliance 00:04:56.323 CC examples/accel/perf/accel_perf.o 00:04:56.323 CC examples/blob/cli/blobcli.o 00:04:56.323 CC examples/blob/hello_world/hello_blob.o 00:04:56.323 LINK fdp 00:04:56.581 LINK pmr_persistence 
00:04:56.581 LINK cmb_copy 00:04:56.581 LINK hotplug 00:04:56.581 LINK hello_world 00:04:56.581 LINK dif 00:04:56.581 LINK arbitration 00:04:56.581 LINK reconnect 00:04:56.839 LINK abort 00:04:56.839 LINK nvme_manage 00:04:56.839 LINK hello_blob 00:04:56.839 LINK accel_perf 00:04:57.098 LINK blobcli 00:04:57.098 CC test/bdev/bdevio/bdevio.o 00:04:57.098 LINK iscsi_fuzz 00:04:57.357 CC examples/bdev/bdevperf/bdevperf.o 00:04:57.357 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.926 LINK hello_bdev 00:04:57.926 LINK bdevio 00:04:57.926 LINK cuse 00:04:58.185 LINK bdevperf 00:04:58.753 CC examples/nvmf/nvmf/nvmf.o 00:04:59.321 LINK nvmf 00:05:04.597 LINK esnap 00:05:04.854 00:05:04.854 real 1m9.777s 00:05:04.854 user 11m14.532s 00:05:04.854 sys 2m41.955s 00:05:04.854 13:59:21 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:04.854 13:59:21 make -- common/autotest_common.sh@10 -- $ set +x 00:05:04.854 ************************************ 00:05:04.854 END TEST make 00:05:04.854 ************************************ 00:05:04.854 13:59:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:04.854 13:59:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:04.854 13:59:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:04.854 13:59:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.854 13:59:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:04.854 13:59:21 -- pm/common@44 -- $ pid=2305894 00:05:04.854 13:59:21 -- pm/common@50 -- $ kill -TERM 2305894 00:05:04.854 13:59:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.854 13:59:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:04.854 13:59:21 -- pm/common@44 -- $ pid=2305896 00:05:04.855 13:59:21 -- pm/common@50 -- $ kill -TERM 2305896 00:05:04.855 13:59:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.855 13:59:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:04.855 13:59:21 -- pm/common@44 -- $ pid=2305898 00:05:04.855 13:59:21 -- pm/common@50 -- $ kill -TERM 2305898 00:05:04.855 13:59:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.855 13:59:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:04.855 13:59:21 -- pm/common@44 -- $ pid=2305925 00:05:04.855 13:59:21 -- pm/common@50 -- $ sudo -E kill -TERM 2305925 00:05:05.113 13:59:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.113 13:59:21 -- nvmf/common.sh@7 -- # uname -s 00:05:05.113 13:59:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.113 13:59:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.113 13:59:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.113 13:59:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.113 13:59:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.113 13:59:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.113 13:59:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.113 13:59:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.113 13:59:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.113 13:59:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.113 13:59:21 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:05.113 13:59:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:05.113 13:59:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.113 13:59:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.113 13:59:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:05.113 13:59:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.113 13:59:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.113 13:59:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.113 13:59:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.113 13:59:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.113 13:59:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.113 13:59:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.113 13:59:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.113 13:59:21 -- paths/export.sh@5 -- # export PATH 00:05:05.114 13:59:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.114 13:59:21 -- nvmf/common.sh@47 -- # : 0 00:05:05.114 13:59:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:05.114 13:59:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:05.114 13:59:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.114 13:59:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.114 13:59:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.114 13:59:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:05.114 13:59:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:05.114 13:59:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:05.114 13:59:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:05.114 13:59:21 -- spdk/autotest.sh@32 -- # uname -s 00:05:05.114 13:59:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:05.114 13:59:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:05.114 13:59:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:05.114 13:59:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:05.114 13:59:21 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:05.114 13:59:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:05.114 13:59:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:05.114 13:59:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:05.114 13:59:21 -- spdk/autotest.sh@48 -- # udevadm_pid=2364656 00:05:05.114 13:59:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:05.114 13:59:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:05.114 13:59:21 -- pm/common@17 -- # local monitor 00:05:05.114 13:59:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.114 13:59:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.114 13:59:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.114 13:59:21 -- pm/common@21 -- # date +%s 00:05:05.114 13:59:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:05.114 13:59:21 -- pm/common@21 -- # date +%s 00:05:05.114 13:59:21 -- pm/common@25 -- # sleep 1 00:05:05.114 13:59:21 -- pm/common@21 -- # date +%s 00:05:05.114 13:59:21 -- pm/common@21 -- # date +%s 00:05:05.114 13:59:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995161 00:05:05.114 13:59:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995161 00:05:05.114 13:59:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995161 00:05:05.114 13:59:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721995161 00:05:05.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995161_collect-vmstat.pm.log 00:05:05.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995161_collect-cpu-load.pm.log 00:05:05.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995161_collect-cpu-temp.pm.log 00:05:05.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721995161_collect-bmc-pm.bmc.pm.log 00:05:06.052 13:59:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:06.052 13:59:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:06.052 13:59:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.052 13:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:06.052 13:59:22 -- spdk/autotest.sh@59 -- # create_test_list 00:05:06.052 13:59:22 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:06.052 13:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:06.052 13:59:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:06.052 13:59:22 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.052 13:59:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
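The four collector launches above and the matching kill loop at the top of this section form a simple start/stop pattern: every monitor is started in the background with one shared "date +%s" suffix, and teardown later TERMs each one through a per-collector pidfile. The sketch below is a minimal reconstruction of that lifecycle from the xtrace, not the actual scripts/perf/pm sources; in particular, it assumes each collector writes its own pidfile, which is what the collect-*.pid checks in the kill loop imply.

# Lifecycle sketch of the pm monitors seen in this trace (reconstruction,
# not the scripts/perf/pm sources; pidfile writing is assumed to happen
# inside each collector, as the kill loop implies).
POWER_DIR=./output/power
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp)

start_monitors() {
    local now monitor
    now=$(date +%s)    # one suffix for the whole run, e.g. monitor.autotest.sh.1721995161
    mkdir -p "$POWER_DIR"
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # -d output dir, -l log to file, -p log-name prefix (flags as traced above)
        "scripts/perf/pm/$monitor" -d "$POWER_DIR" -l -p "monitor.autotest.sh.$now"
    done
}

stop_monitors() {
    local monitor pid
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # skip collectors that never started; otherwise TERM the recorded pid
        [[ -e $POWER_DIR/$monitor.pid ]] || continue
        pid=$(<"$POWER_DIR/$monitor.pid")
        kill -TERM "$pid" 2>/dev/null || true
    done
}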
00:05:06.052 13:59:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:06.052 13:59:22 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.052 13:59:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:06.052 13:59:22 -- common/autotest_common.sh@1455 -- # uname 00:05:06.052 13:59:22 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:06.052 13:59:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:06.052 13:59:22 -- common/autotest_common.sh@1475 -- # uname 00:05:06.052 13:59:22 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:06.052 13:59:22 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:06.052 13:59:22 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:06.052 13:59:22 -- spdk/autotest.sh@72 -- # hash lcov 00:05:06.052 13:59:22 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:06.052 13:59:22 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:06.052 --rc lcov_branch_coverage=1 00:05:06.052 --rc lcov_function_coverage=1 00:05:06.052 --rc genhtml_branch_coverage=1 00:05:06.052 --rc genhtml_function_coverage=1 00:05:06.052 --rc genhtml_legend=1 00:05:06.052 --rc geninfo_all_blocks=1 00:05:06.052 ' 00:05:06.052 13:59:22 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:06.052 --rc lcov_branch_coverage=1 00:05:06.052 --rc lcov_function_coverage=1 00:05:06.052 --rc genhtml_branch_coverage=1 00:05:06.052 --rc genhtml_function_coverage=1 00:05:06.052 --rc genhtml_legend=1 00:05:06.052 --rc geninfo_all_blocks=1 00:05:06.052 ' 00:05:06.052 13:59:22 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:06.052 --rc lcov_branch_coverage=1 00:05:06.052 --rc lcov_function_coverage=1 00:05:06.052 --rc genhtml_branch_coverage=1 00:05:06.052 --rc genhtml_function_coverage=1 00:05:06.052 --rc genhtml_legend=1 00:05:06.052 --rc geninfo_all_blocks=1 00:05:06.052 --no-external' 00:05:06.052 13:59:22 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:06.052 --rc lcov_branch_coverage=1 00:05:06.052 --rc lcov_function_coverage=1 00:05:06.052 --rc genhtml_branch_coverage=1 00:05:06.052 --rc genhtml_function_coverage=1 00:05:06.052 --rc genhtml_legend=1 00:05:06.052 --rc geninfo_all_blocks=1 00:05:06.052 --no-external' 00:05:06.052 13:59:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:06.311 lcov: LCOV version 1.14 00:05:06.311 13:59:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:32.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:32.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:54.835 
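The lcov invocation above is a zero-count baseline: capturing with -c -i before any test executes records every instrumented file at 0%, so sources never touched by the tests still appear in the final report. A minimal sketch of that workflow follows, using the lcov 1.x flags shown in the trace; only the baseline step appears in this log, so the post-test capture and the -a merge are the standard lcov idiom rather than anything taken from the run.

# Coverage workflow whose first step is traced above (lcov 1.x flags;
# steps 2 and 3 are the usual idiom and are assumptions here).
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
SRC=.
OUT=../output

# 1. zero-count baseline before any test executes (-c capture, -i initial)
lcov $LCOV_OPTS -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"

# 2. after the tests: capture the real execution counters
lcov $LCOV_OPTS -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"

# 3. merge, so files never exercised by tests still show up at 0%
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

The run of geninfo WARNING lines that follows is this baseline pass visiting .gcno files that contain no function records; geninfo reports each one and continues, so the wall of warnings is expected noise rather than a failure.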
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:54.835 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:54.836 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:54.836 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:54.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:54.837 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:54.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:54.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:54.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:54.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:06:00.098 14:00:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:00.098 14:00:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.098 14:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.098 14:00:16 -- spdk/autotest.sh@91 -- # rm -f 00:06:00.098 14:00:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:02.000 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:06:02.000 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:06:02.000 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:06:02.000 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:06:02.000 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:06:02.000 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:06:02.000 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:06:02.000 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:06:02.000 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:06:02.000 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:06:02.000 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:06:02.000 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:06:02.000 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:06:02.000 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:06:02.000 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:06:02.000 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:06:02.000 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:06:02.000 14:00:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:02.000 14:00:18 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:02.000 14:00:18 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:02.000 14:00:18 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:02.000 14:00:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.000 14:00:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:02.000 14:00:18 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
00:06:02.000 14:00:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:02.000 14:00:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.000 14:00:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:02.000 14:00:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:02.000 14:00:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:02.000 14:00:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:02.000 14:00:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:02.000 14:00:18 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:02.000 No valid GPT data, bailing 00:06:02.259 14:00:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:02.259 14:00:18 -- scripts/common.sh@391 -- # pt= 00:06:02.259 14:00:18 -- scripts/common.sh@392 -- # return 1 00:06:02.259 14:00:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:02.259 1+0 records in 00:06:02.259 1+0 records out 00:06:02.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048426 s, 217 MB/s 00:06:02.259 14:00:18 -- spdk/autotest.sh@118 -- # sync 00:06:02.259 14:00:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:02.259 14:00:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:02.259 14:00:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:04.799 14:00:21 -- spdk/autotest.sh@124 -- # uname -s 00:06:04.799 14:00:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:04.799 14:00:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:04.799 14:00:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.799 14:00:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.799 14:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:04.799 ************************************ 00:06:04.799 START TEST setup.sh 00:06:04.799 ************************************ 00:06:04.799 14:00:21 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:04.799 * Looking for test storage... 00:06:04.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:04.800 14:00:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:04.800 14:00:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:04.800 14:00:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:04.800 14:00:21 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.800 14:00:21 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.800 14:00:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:04.800 ************************************ 00:06:04.800 START TEST acl 00:06:04.800 ************************************ 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:04.800 * Looking for test storage... 
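The device probes just traced reduce to two small helpers: a zoned check that reads /sys/block/<dev>/queue/zoned and compares it against "none", and an in-use check that looks for a partition table before the first MiB is scrubbed with dd. The sketch below is reconstructed from the xtrace and is not the autotest source itself: the real script tries scripts/spdk-gpt.py before falling back to blkid, and its extglob device pattern (nvme*n!(*p*)) is simplified here to nvme*n1.

# Reconstruction of the two probes traced above (simplified; the real
# script consults spdk-gpt.py before blkid).
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(</sys/block/$device/queue/zoned) != none ]]
}

block_has_pt() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block")
    [[ -n $pt ]]    # empty PTTYPE is the "No valid GPT data, bailing" case
}

for dev in /dev/nvme*n1; do
    name=${dev#/dev/}
    is_block_zoned "$name" && continue    # leave zoned namespaces alone
    if ! block_has_pt "$dev"; then
        # no partition table, so scrub the first MiB as the log does (destructive)
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done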
00:06:04.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:04.800 14:00:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:04.800 14:00:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:04.800 14:00:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:04.800 14:00:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:04.800 14:00:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:04.800 14:00:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:04.800 14:00:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:04.800 14:00:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:04.800 14:00:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:06.709 14:00:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:06.709 14:00:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:06.709 14:00:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:06.709 14:00:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:06.709 14:00:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.709 14:00:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:08.141 Hugepages 00:06:08.141 node hugesize free / total 00:06:08.141 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 00:06:08.142 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:08.142 14:00:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:08.142 14:00:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:08.142 14:00:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:08.142 14:00:25 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.142 14:00:25 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.142 14:00:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:08.400 ************************************ 00:06:08.400 START TEST denied 00:06:08.400 ************************************ 00:06:08.400 14:00:25 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:06:08.400 14:00:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:06:08.400 14:00:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:08.401 14:00:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:06:08.401 14:00:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.401 14:00:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:10.312 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:06:10.312 14:00:26 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:10.312 14:00:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:13.597 00:06:13.597 real 0m4.748s 00:06:13.597 user 0m1.446s 00:06:13.597 sys 0m2.423s 00:06:13.597 14:00:29 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.597 14:00:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:13.597 ************************************ 00:06:13.597 END TEST denied 00:06:13.597 ************************************ 00:06:13.597 14:00:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:13.597 14:00:29 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.597 14:00:29 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.597 14:00:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:13.597 ************************************ 00:06:13.597 START TEST allowed 00:06:13.597 ************************************ 00:06:13.597 14:00:29 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:06:13.597 14:00:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:06:13.597 14:00:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:13.597 14:00:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:06:13.597 14:00:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.597 14:00:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:16.141 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:16.141 14:00:32 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:06:16.141 14:00:32 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:16.141 14:00:32 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:16.141 14:00:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:16.141 14:00:32 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:17.516 00:06:17.516 real 0m4.522s 00:06:17.516 user 0m1.161s 00:06:17.516 sys 0m2.268s 00:06:17.516 14:00:34 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.516 14:00:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:17.516 ************************************ 00:06:17.516 END TEST allowed 00:06:17.516 ************************************ 00:06:17.775 00:06:17.775 real 0m12.897s 00:06:17.775 user 0m4.040s 00:06:17.775 sys 0m6.988s 00:06:17.775 14:00:34 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.775 14:00:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:17.775 ************************************ 00:06:17.775 END TEST acl 00:06:17.775 ************************************ 00:06:17.775 14:00:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:06:17.775 14:00:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.775 14:00:34 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.775 14:00:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:17.775 ************************************ 00:06:17.775 START TEST hugepages 00:06:17.775 ************************************ 00:06:17.775 14:00:34 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:06:17.775 * Looking for test storage... 00:06:17.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 27148460 kB' 'MemAvailable: 30729724 kB' 'Buffers: 2704 kB' 'Cached: 10214124 kB' 'SwapCached: 0 kB' 'Active: 7226920 kB' 'Inactive: 3506828 kB' 'Active(anon): 6832096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519956 kB' 'Mapped: 200768 kB' 'Shmem: 6315176 kB' 'KReclaimable: 183304 kB' 'Slab: 540204 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356900 kB' 'KernelStack: 12400 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304780 kB' 'Committed_AS: 7961440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
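Everything from here to the end of this section is the tail of one helper, get_meminfo: it snapshots /proc/meminfo (or a node's own meminfo file), strips any "Node N" prefix, then scans field by field until the requested key matches, which is why the trace below is a long run of continue lines. The reconstruction that follows keeps the variable names from the xtrace; the node-argument handling is inferred from the /sys/devices/system/node test above.

# get_meminfo as reconstructed from the xtrace (the printf feeds the
# snapshot back through the same IFS=': ' field split seen in the trace).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem var val _
    # per-node queries read that node's own meminfo file when it exists
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix on node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # the long run of 'continue's below
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo Hugepagesize    # prints 2048 (kB) for the snapshot in this log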
00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:17.775 14:00:34 setup.sh.hugepages -- setup/common.sh@31..32 -- # [xtrace condensed] get_meminfo kept reading the snapshot with IFS=': ' read -r var val _; every key from MemFree through HugePages_Rsvd failed [[ $var == Hugepagesize ]] and hit continue
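[editor's note] The condensed trace above is setup/common.sh's get_meminfo scanning /proc/meminfo one key at a time until the requested field turns up. A minimal sketch of that pattern, assuming it mirrors what the trace shows (the real helper first snapshots the file with mapfile and printf, as the later records make explicit; this simplification reads the file directly):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do    # 'Hugepagesize:  2048 kB' -> var, val, unit
            [[ $var == "$get" ]] || continue    # the long run of 'continue' records above
            echo "$val"                         # numeric field only
            return 0
        done < /proc/meminfo
        return 1                                # key not present
    }

On this runner get_meminfo Hugepagesize prints 2048, which the next records store as default_hugepages.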
00:06:17.776 14:00:34 setup.sh.hugepages -- setup/common.sh@31..32 -- # [xtrace condensed] skipped HugePages_Surp, then [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] matched
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@21..24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@29..30 -- # [xtrace condensed] for node in /sys/devices/system/node/node+([0-9]): nodes_sys[0]=2048, nodes_sys[1]=0
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@32..33 -- # no_nodes=2
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@39..41 -- # [xtrace condensed] echo 0 for every hugepages-* sysfs entry on node0 and node1
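[editor's note] get_nodes found two NUMA nodes, and the trace records nodes_sys[0]=2048 and nodes_sys[1]=0, i.e. node0 apparently still held 2048 pages from an earlier run; clear_hp then zeroes every per-node reservation so the test starts clean. A rough equivalent of that zeroing pass, assuming the usual sysfs layout (the trace shows the 'echo 0' but not the full target path):

    # CLEAR_HUGE=yes: drop all pre-existing per-node hugepage reservations
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # assumed target file; one per page size per node
        done
    done

Writing 0 here releases the pages back to the kernel; the test then re-reserves exactly what it asks for.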
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:06:17.777 14:00:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:06:17.777 14:00:34 setup.sh.hugepages -- common/autotest_common.sh@1101..@10 -- # [xtrace condensed] '[' 2 -le 1 ']', xtrace_disable, set +x
00:06:17.777 ************************************
00:06:17.777 START TEST default_setup
00:06:17.777 ************************************
00:06:17.777 14:00:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:06:17.777 14:00:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:06:17.777 14:00:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49..73 -- # [xtrace condensed] size=2097152, node_ids=('0'), nr_hugepages=1024, nodes_test[0]=1024 across no_nodes=2
00:06:17.777 14:00:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:06:17.777 14:00:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:06:17.777 14:00:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
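[editor's note] run_test wraps default_setup, which asks get_test_nr_hugepages for 2097152 of hugepage memory pinned to node 0. The arithmetic behind the nr_hugepages=1024 seen in the condensed records, as a sketch (reading the size as kB is an inference from the 2048 kB page size; the division itself is exactly what the @49..73 records performed):

    size_kb=2097152 default_hugepages=2048            # values from the trace
    nr_hugepages=$(( size_kb / default_hugepages ))   # -> 1024 pages of 2 MiB

scripts/setup.sh then reserves the pages (global_huge_nr=/proc/sys/vm/nr_hugepages was set above) and rebinds the test devices to vfio-pci, as the records below show.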
00:06:19.681 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:06:19.681 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:06:19.681 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:06:19.681 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:06:19.681 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:06:19.681 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:06:19.681 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:06:19.681 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:06:19.681 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:06:20.247 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:06:20.508 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:06:20.509 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89..94 -- # [xtrace condensed] local node sorted_t sorted_s surp resv anon
00:06:20.509 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:20.509 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:20.509 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@17..31 -- # [xtrace condensed] get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile -t mem
00:06:20.509 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29250124 kB' 'MemAvailable: 32831388 kB' 'Buffers: 2704 kB' 'Cached: 10214220 kB' 'SwapCached: 0 kB' 'Active: 7245612 kB' 'Inactive: 3506828 kB' 'Active(anon): 6850788 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538812 kB' 'Mapped: 200824 kB' 'Shmem: 6315272 kB' 'KReclaimable: 183304 kB' 'Slab: 539680 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356376 kB' 'KernelStack: 12416 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7981872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
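[editor's note] verify_nr_hugepages re-reads the memory state after scripts/setup.sh has run. The snapshot above already carries the expected result: HugePages_Total and HugePages_Free are 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB (1024 x 2048 kB). The condensed queries that follow pull single fields out of it; a sketch of the checks this implies, reusing the get_meminfo sketch from earlier (the exact assertions live in setup/hugepages.sh and are assumed here):

    anon=$(get_meminfo AnonHugePages)    # 0 kB -> no transparent hugepages in the way
    surp=$(get_meminfo HugePages_Surp)   # 0   -> no surplus pages over the reservation
    rsvd=$(get_meminfo HugePages_Rsvd)   # 0   -> nothing reserved by mappings yet
    (( $(get_meminfo HugePages_Total) == 1024 )) || echo 'unexpected hugepage count' >&2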
00:06:20.509 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31..32 -- # [xtrace condensed] scanned the snapshot key by key; every key from MemTotal through HardwareCorrupted failed [[ $var == AnonHugePages ]] and hit continue
00:06:20.510 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:20.510 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:20.510 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:20.510 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:06:20.510 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:20.510 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@17..31 -- # [xtrace condensed] get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem
00:06:20.510 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29250816 kB' 'MemAvailable: 32832080 kB' 'Buffers: 2704 kB' 'Cached: 10214224 kB' 'SwapCached: 0 kB' 'Active: 7245680 kB' 'Inactive: 3506828 kB' 'Active(anon): 6850856 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538936 kB' 'Mapped: 200764 kB' 'Shmem: 6315276 kB' 'KReclaimable: 183304 kB' 'Slab: 539704 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356400 kB' 'KernelStack: 12448 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7981892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
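[editor's note] get_meminfo also takes an optional NUMA node: with node unset, the [[ -e /sys/devices/system/node/node/meminfo ]] probe in the condensed preamble fails (note the empty spot where the node number would go) and the helper falls back to /proc/meminfo. A sketch of that dispatch, assuming the behavior the probe implies:

    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view of the same fields
    fi

The per-node file prefixes each line with 'Node N ', which is what the mem=("${mem[@]#Node +([0-9]) }") record strips.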
setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.511 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / compare / continue cycle repeats for every remaining /proc/meminfo field (NFS_Unstable through HugePages_Rsvd) without matching ...]
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
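Editor's note: the loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo one line at a time with IFS=': ' and read -r var val _ until the requested field matches; the backslash-heavy \H\u\g\e... strings are simply how bash xtrace renders a literal (non-glob) right-hand side of [[ == ]]. A minimal standalone sketch of the same pattern, reconstructed from the trace rather than copied from the SPDK source (the function name here is hypothetical):

    #!/usr/bin/env bash
    # Sketch: scan a meminfo-style file until the requested field matches,
    # then print its value (mirrors the traced read/compare/continue loop).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # e.g. skip until HugePages_Surp
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Surp   # prints 0 on the machine traced above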
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:20.512 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:20.513 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29250816 kB' 'MemAvailable: 32832080 kB' 'Buffers: 2704 kB' 'Cached: 10214240 kB' 'SwapCached: 0 kB' 'Active: 7245908 kB' 'Inactive: 3506828 kB' 'Active(anon): 6851084 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539144 kB' 'Mapped: 200764 kB' 'Shmem: 6315292 kB' 'KReclaimable: 183304 kB' 'Slab: 539704 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356400 kB' 'KernelStack: 12448 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7981912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:20.513 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... the same read/compare/continue scan repeats for each field (MemTotal through HugePages_Free) without matching ...]
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:20.783 nr_hugepages=1024
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:20.783 resv_hugepages=0
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:20.783 surplus_hugepages=0
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:20.783 anon_hugepages=0
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
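Editor's note: the hugepages.sh@107/@109 checks above assert the accounting identity behind the numbers just echoed: the kernel-reported total must account for the requested pages plus any surplus and reserved pages. A hedged restatement with the values logged above (variable names paraphrased from the trace, not the script's exact source):

    # Values as logged above; names paraphrased from the trace.
    nr_hugepages=1024   # requested 2 MiB pages
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total, fetched next in the trace
    # The pool is consistent when total covers requested + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"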
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29250816 kB' 'MemAvailable: 32832080 kB' 'Buffers: 2704 kB' 'Cached: 10214264 kB' 'SwapCached: 0 kB' 'Active: 7245880 kB' 'Inactive: 3506828 kB' 'Active(anon): 6851056 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539144 kB' 'Mapped: 200764 kB' 'Shmem: 6315316 kB' 'KReclaimable: 183304 kB' 'Slab: 539704 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356400 kB' 'KernelStack: 12448 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7981936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:20.783 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[... the same read/compare/continue scan repeats for each field (MemTotal through Unaccepted) without matching ...]
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
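Editor's note: the get_nodes trace just below walks the NUMA node directories with an extglob pattern and records a per-node page count (1024 on node 0, 0 on node 1). A rough standalone equivalent; the sysfs counter path is an assumption, since xtrace shows only the already-expanded values:

    #!/usr/bin/env bash
    shopt -s extglob          # needed for the +([0-9]) glob below
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} -> numeric node id; counter path assumed here
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this machine, matching the trace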
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
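Editor's note: get_meminfo is now re-entered with an explicit node argument; per the trace below, mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips before the same field scan runs. A minimal sketch of that per-node variant (function name hypothetical; sed stands in for the extglob strip):

    get_node_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node meminfo lines look like: "Node 0 HugePages_Surp: 0"
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }
    get_node_meminfo_sketch HugePages_Surp 0   # node 0 surplus; 0 in the trace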
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12684728 kB' 'MemUsed: 11934684 kB' 'SwapCached: 0 kB' 'Active: 5845064 kB' 'Inactive: 3329964 kB' 'Active(anon): 5586176 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8809476 kB' 'Mapped: 126788 kB' 'AnonPages: 368832 kB' 'Shmem: 5220624 kB' 'KernelStack: 7704 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118544 kB' 'Slab: 281592 kB' 'SReclaimable: 118544 kB' 'SUnreclaim: 163048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same read/compare/continue scan runs over the node0 fields (MemFree through HugePages_Total) without matching ...]
00:06:20.784 14:00:37
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:20.784 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:20.785 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:20.785 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:20.785 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:20.785 node0=1024 expecting 1024 00:06:20.785 14:00:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:20.785 00:06:20.785 real 0m2.881s 00:06:20.785 user 0m0.842s 00:06:20.785 sys 0m1.188s 00:06:20.785 14:00:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.785 14:00:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:20.785 ************************************ 00:06:20.785 END TEST default_setup 00:06:20.785 ************************************ 00:06:20.785 14:00:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:20.785 14:00:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.785 14:00:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.785 14:00:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:20.785 ************************************ 00:06:20.785 START TEST per_node_1G_alloc 00:06:20.785 ************************************ 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:20.785 14:00:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:22.692 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:22.692 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:22.692 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:22.692 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:22.692 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:22.692 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:22.692 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:22.692 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:22.692 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:22.692 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:22.692 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:22.692 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:22.692 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:22.692 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:22.692 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:22.692 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:22.692 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:22.692 14:00:39 
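NRHUGE=512 with HUGENODE=0,1 asks setup.sh to reserve 512 default-size (2048 kB) pages on each of NUMA nodes 0 and 1: the 1048576 kB requested by get_test_nr_hugepages divided by the 2048 kB page size is 512 per node, giving the nr_hugepages=1024 total seen at hugepages.sh@147. setup.sh's internals are not shown in this log; a minimal sketch of such a per-node reservation, assuming the standard hugetlb sysfs interface:

    # Reserve $NRHUGE 2048 kB hugepages on every node listed in $HUGENODE.
    NRHUGE=${NRHUGE:-512}
    IFS=',' read -r -a nodes <<< "${HUGENODE:-0}"
    for node in "${nodes[@]}"; do
        sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB
        echo "$NRHUGE" > "$sysfs/nr_hugepages"
        # Read back: the kernel may grant fewer pages than requested if
        # contiguous memory on that node is fragmented.
        echo "node${node}: $(cat "$sysfs/nr_hugepages") pages reserved"
    done
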
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29235888 kB' 'MemAvailable: 32817152 kB' 'Buffers: 2704 kB' 'Cached: 10214336 kB' 'SwapCached: 0 kB' 'Active: 7246092 kB' 'Inactive: 3506828 kB' 'Active(anon): 6851268 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538712 kB' 'Mapped: 200844 kB' 'Shmem: 6315388 kB' 'KReclaimable: 183304 kB' 'Slab: 539784 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356480 kB' 'KernelStack: 12432 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7982276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.692 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.692 14:00:39 [xtrace condensed: the compare/continue cycle repeats for each subsequent /proc/meminfo field until AnonHugePages matches] 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
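Every get_meminfo call traced here follows one cycle: read the meminfo file into an array, strip any "Node N " prefix, then scan each "Key: value" line with IFS=': ' until the requested key matches and echo its value (0 for AnonHugePages above). A condensed reconstruction of that pattern, not the verbatim setup/common.sh:

    shopt -s extglob   # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 0, as echoed at common.sh@33 above
                return 0
            fi
        done
        return 1
    }
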
00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29237908 kB' 'MemAvailable: 32819172 kB' 'Buffers: 2704 kB' 'Cached: 10214340 kB' 'SwapCached: 0 kB' 'Active: 7246004 kB' 'Inactive: 3506828 kB' 'Active(anon): 6851180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539060 kB' 'Mapped: 200764 kB' 'Shmem: 6315392 kB' 'KReclaimable: 183304 kB' 'Slab: 539740 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356436 kB' 'KernelStack: 12480 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7982296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.694 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.694 14:00:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.694 14:00:39 [xtrace condensed: the compare/continue cycle repeats for each /proc/meminfo field until HugePages_Surp matches] 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29237908 kB' 'MemAvailable: 32819172 kB' 'Buffers: 2704 kB' 'Cached: 10214356 kB' 'SwapCached: 0 kB' 'Active: 7246044 kB' 'Inactive: 3506828 kB' 'Active(anon): 6851220 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539064 kB' 'Mapped: 200764 kB' 'Shmem: 6315408 kB' 'KReclaimable: 183304 kB' 'Slab: 539740 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356436 kB' 'KernelStack: 12480 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7982316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.696 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.697 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:22.698 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:22.698 nr_hugepages=1024 00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:22.699 resv_hugepages=0 00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:22.699 surplus_hugepages=0 00:06:22.699 14:00:39 
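The xtrace above fixes the shape of the get_meminfo helper in setup/common.sh: choose /proc/meminfo (or the node-local meminfo file when a node argument is given), strip the "Node N " prefix that per-node files carry, then scan "key: value" pairs until the requested key matches and echo its value. A minimal bash re-creation, reconstructed from the trace rather than copied from the SPDK tree, so details such as the redirection into mapfile are assumptions:

#!/usr/bin/env bash
# Sketch of get_meminfo as reconstructed from the xtrace; not the verbatim SPDK source.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # A per-node query, e.g. "get_meminfo HugePages_Surp 0", reads the node's own file.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"           # redirection assumed; xtrace does not show it
    mem=("${mem[@]#Node +([0-9]) }")    # node files prefix every line with "Node N "

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                     # e.g. 0 for HugePages_Rsvd in this run
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

The \H\u\g\e\P\a\g\e\s\_\R\s\v\d spelling in the trace is just how bash xtrace renders the quoted right-hand side of [[ $var == "$get" ]]; each backslash protects one literal character from pattern matching.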
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:22.699 anon_hugepages=0
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29237976 kB' 'MemAvailable: 32819240 kB' 'Buffers: 2704 kB' 'Cached: 10214380 kB' 'SwapCached: 0 kB' 'Active: 7246408 kB' 'Inactive: 3506828 kB' 'Active(anon): 6851584 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539464 kB' 'Mapped: 200764 kB' 'Shmem: 6315432 kB' 'KReclaimable: 183304 kB' 'Slab: 539740 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356436 kB' 'KernelStack: 12448 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7983576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:22.699 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace repeats the read/compare/continue cycle against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l for every non-matching key, MemTotal through Unaccepted]
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
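At hugepages.sh@107-110 the test asserts the kernel's hugepage bookkeeping identity for this run: both HugePages_Free and HugePages_Total (1024 pages of 2048 kB) must equal nr_hugepages plus surplus plus reserved pages. A hedged sketch of that check, reusing the get_meminfo sketch above; the variable names mirror the trace but the exact call sites producing the literal 1024 are inferred:

# Sketch of the consistency check visible at hugepages.sh@107/@109/@110 (assumed form).
nr_hugepages=1024                        # page count requested by the test
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run

(( $(get_meminfo HugePages_Free)  == nr_hugepages + surp + resv ))
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

With surplus and reserved both zero, 1024 == 1024 + 0 + 0 holds on each check, so the test proceeds to split the pool across NUMA nodes.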
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13723072 kB' 'MemUsed: 10896340 kB' 'SwapCached: 0 kB' 'Active: 5845960 kB' 'Inactive: 3329964 kB' 'Active(anon): 5587072 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8809488 kB' 'Mapped: 126788 kB' 'AnonPages: 369588 kB' 'Shmem: 5220636 kB' 'KernelStack: 7912 kB' 'PageTables: 5296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118544 kB' 'Slab: 281624 kB' 'SReclaimable: 118544 kB' 'SUnreclaim: 163080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:22.701 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace repeats the read/compare/continue cycle against \H\u\g\e\P\a\g\e\s\_\S\u\r\p for the node0 keys MemTotal through HugePages_Total; the excerpt cuts off mid-scan]
00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.702 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.703 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15511384 kB' 'MemUsed: 3895860 kB' 'SwapCached: 0 kB' 'Active: 1401000 kB' 'Inactive: 176864 kB' 'Active(anon): 1265064 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1407640 kB' 'Mapped: 73976 kB' 'AnonPages: 170316 kB' 'Shmem: 1094840 kB' 'KernelStack: 4792 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64760 kB' 'Slab: 258116 kB' 'SReclaimable: 64760 kB' 'SUnreclaim: 193356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:22.703 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.703 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- 
00:06:22.703 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- [setup/common.sh@31-32: field-by-field scan of the node-1 meminfo snapshot condensed; every field from MemTotal through HugePages_Free fails the HugePages_Surp match and hits continue]
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:22.963 node0=512 expecting 512
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:06:22.963 node1=512 expecting 512
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:22.963
00:06:22.963 real 0m2.022s
00:06:22.963 user 0m0.873s
00:06:22.963 sys 0m1.129s
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:22.963 14:00:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:22.963 ************************************
00:06:22.963 END TEST per_node_1G_alloc
00:06:22.963 ************************************
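Both per-node reads above go through setup/common.sh's get_meminfo. For reference, a minimal sketch of the pattern the xtrace shows, reconstructed from the traced commands rather than copied from the script; the missing-field fallback is an assumption:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern from the trace: use the per-node sysfs
    # snapshot when a node index is given and present, else /proc/meminfo, then
    # scan "key: value" pairs until the requested field matches.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        # Per-node lines carry a "Node <N> " prefix; strip it, as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0   # assumed fallback when the field is absent
    }

    get_meminfo HugePages_Surp 1   # prints 0 against the node-1 snapshot above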
00:06:22.963 14:00:39 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:06:22.963 14:00:39 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:22.963 14:00:39 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:22.963 14:00:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:22.963 ************************************
00:06:22.963 START TEST even_2G_alloc
00:06:22.963 ************************************
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:22.963 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:22.964 14:00:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:24.342 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:24.342 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:24.342 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:24.342 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:24.342 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:24.342 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:24.342 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:24.342 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:24.342 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:24.342 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:24.342 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:24.342 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:24.342 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:24.342 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:24.342 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:24.342 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:24.342 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
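The sizing trace above is worth unpacking before the verification that follows: assuming get_test_nr_hugepages' argument is in kB and the default hugepage size is 2048 kB (consistent with the 'Hugepagesize: 2048 kB' field in the snapshots), the 2G request becomes 1024 pages, which get_test_nr_hugepages_per_node spreads evenly over both nodes. A sketch of that arithmetic; the division by _no_nodes is an assumption that reproduces the traced values:

    #!/usr/bin/env bash
    # Illustrative re-derivation of the traced numbers (512 pages per node).
    size=2097152             # kB, the argument to get_test_nr_hugepages
    default_hugepages=2048   # kB per page; assumption consistent with the log
    nr_hugepages=$(( size / default_hugepages ))    # -> 1024

    _nr_hugepages=$nr_hugepages
    _no_nodes=2
    nodes_test=()
    # Walk the node indices from the top down, handing each an even share,
    # mirroring the (( _no_nodes > 0 )) loop in the trace.
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traces as ': 512', ': 0'
        : $(( --_no_nodes ))                                  # traces as ': 1', ': 0'
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"      # node0=512 node1=512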
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.608 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29247312 kB' 'MemAvailable: 32828576 kB' 'Buffers: 2704 kB' 'Cached: 10214468 kB' 'SwapCached: 0 kB' 'Active: 7244920 kB' 'Inactive: 3506828 kB' 'Active(anon): 6850096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537728 kB' 'Mapped: 200168 kB' 'Shmem: 6315520 kB' 'KReclaimable: 183304 kB' 'Slab: 539584 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356280 kB' 'KernelStack: 12368 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7972596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- [setup/common.sh@31-32: field-by-field scan of the global snapshot condensed; MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each fail the AnonHugePages match and hit continue]
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
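The pass that just finished is verify_nr_hugepages computing its anon term: AnonHugePages is only consulted because transparent hugepages are not fully disabled on this host ('always [madvise] never' does not contain '[never]'), and the snapshot reports 0 kB, so anon=0. A minimal sketch of that guard, using the get_meminfo sketched earlier; the sysfs path is the standard THP switch:

    #!/usr/bin/env bash
    # Guard seen at setup/hugepages.sh@96-97 in the trace above.
    anon=0
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    # The file reads e.g. "always [madvise] never"; brackets mark the active mode.
    if [[ $(<"$thp") != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB; 0 in the snapshot above
    fi
    echo "anon=$anon"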
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.609 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.610 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29249676 kB' 'MemAvailable: 32830940 kB' 'Buffers: 2704 kB' 'Cached: 10214468 kB' 'SwapCached: 0 kB' 'Active: 7247844 kB' 'Inactive: 3506828 kB' 'Active(anon): 6853020 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540692 kB' 'Mapped: 200164 kB' 'Shmem: 6315520 kB' 'KReclaimable: 183304 kB' 'Slab: 539584 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356280 kB' 'KernelStack: 12416 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7982180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:24.610 14:00:41 setup.sh.hugepages.even_2G_alloc -- [setup/common.sh@31-32: field-by-field scan of the refreshed global snapshot condensed; MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack and PageTables each fail the HugePages_Surp match and hit continue; the capture ends mid-scan here]
00:06:24.610 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- 
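The pass traced above is the get_meminfo helper from setup/common.sh: it picks /proc/meminfo (or a per-node meminfo file when a node is given), strips the "Node N " prefix that per-node files carry, then splits each "Field: value" line on ': ' and returns the value of the requested field. A minimal standalone sketch of the same technique, reconstructed from the xtrace output above (the loop body and error handling are inferred, not SPDK's literal source):

  #!/usr/bin/env bash
  shopt -s extglob                         # required for the +([0-9]) pattern
  get_meminfo() {                          # usage: get_meminfo <Field> [<node>]
          local get=$1 node=${2:-} var val _ line
          local mem_f=/proc/meminfo mem
          # Per-node statistics live under /sys and shadow the global file.
          [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
                  mem_f=/sys/devices/system/node/node$node/meminfo
          mapfile -t mem < "$mem_f"
          mem=("${mem[@]#Node +([0-9]) }") # per-node lines start with "Node N "
          for line in "${mem[@]}"; do
                  IFS=': ' read -r var val _ <<< "$line"
                  [[ $var == "$get" ]] && echo "$val" && return 0
          done
          return 1
  }
  get_meminfo HugePages_Surp               # prints 0 on this box

Stripping the "Node N " prefix up front is what lets the same field-matching loop serve both the global and the per-node meminfo files.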
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.611 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.612 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29250144 kB' 'MemAvailable: 32831408 kB' 'Buffers: 2704 kB' 'Cached: 10214484 kB' 'SwapCached: 0 kB' 'Active: 7248492 kB' 'Inactive: 3506828 kB' 'Active(anon): 6853668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541312 kB' 'Mapped: 200540 kB' 'Shmem: 6315536 kB' 'KReclaimable: 183304 kB' 'Slab: 539608 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356304 kB' 'KernelStack: 12368 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7975580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195652 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:24.612 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:24.612 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:06:24.612 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.612 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.613 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:24.613 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:24.613 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:24.613 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:24.613 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:24.613 nr_hugepages=1024
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:24.614 resv_hugepages=0
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:24.614 surplus_hugepages=0
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:24.614 anon_hugepages=0
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
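With surp and resv in hand, the (( ... )) guards just traced are hugepages.sh's consistency check: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, and in this run all of it collapses to 1024 == 1024 + 0 + 0. Spelled out with the values from the snapshots above (a paraphrase built on the get_meminfo sketch shown earlier, not the script's literal lines):

  nr_hugepages=1024                        # requested page count
  surp=$(get_meminfo HugePages_Surp)       # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
  total=$(get_meminfo HugePages_Total)     # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo "accounting mismatch" >&2
  (( total == nr_hugepages ))              || echo "short allocation" >&2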
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29250144 kB' 'MemAvailable: 32831408 kB' 'Buffers: 2704 kB' 'Cached: 10214508 kB' 'SwapCached: 0 kB' 'Active: 7243000 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848176 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535816 kB' 'Mapped: 200540 kB' 'Shmem: 6315560 kB' 'KReclaimable: 183304 kB' 'Slab: 539608 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356304 kB' 'KernelStack: 12384 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7970576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:24.614 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:24.615 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.616 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13729424 kB' 'MemUsed: 10889988 kB' 'SwapCached: 0 kB' 'Active: 5844212 kB' 'Inactive: 3329964 kB' 'Active(anon): 5585324 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8809488 kB' 'Mapped: 126072 kB' 'AnonPages: 367828 kB' 'Shmem: 5220636 kB' 'KernelStack: 7640 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118544 kB' 'Slab: 281584 kB' 'SReclaimable: 118544 kB' 'SUnreclaim: 163040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log condensed: the @31-32 skip loop then walks every node0 field from MemTotal through HugePages_Free with "continue" until HugePages_Surp matches]
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:24.617 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15520948 kB' 'MemUsed: 3886296 kB' 'SwapCached: 0 kB' 'Active: 1400344 kB' 'Inactive: 176864 kB' 'Active(anon): 1264408 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1407756 kB' 'Mapped: 74108 kB' 'AnonPages: 169560 kB' 'Shmem: 1094956 kB' 'KernelStack: 4728 kB' 'PageTables: 3160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64760 kB' 'Slab: 258024 kB' 'SReclaimable: 64760 kB' 'SUnreclaim: 193264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log condensed: the same @31-32 skip loop runs over node1's fields (MemTotal through HugePages_Free) until HugePages_Surp matches]
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
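[editor's note] Everything the condensed traces above show between setup/common.sh@16 and @33 is one small helper. The following is a minimal bash sketch of that get_meminfo helper, reconstructed purely from the xtrace (same names and control flow as the trace; treat it as an illustration of the traced logic, not a verbatim copy of the SPDK source):

shopt -s extglob                      # the +([0-9]) patterns below need extglob

get_meminfo() {                       # usage: get_meminfo <Field> [<numa-node>]
	local get=$1 node=${2:-}
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# a per-node query reads that NUMA node's own meminfo instead; with no
	# node argument the path below does not exist and /proc/meminfo is kept
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# per-node files prefix every line with "Node N "; strip that prefix
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # skip fields until the requested one
		echo "$val"                        # value only; the kB unit lands in _
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

Read this way, the even_2G_alloc tail above is simply: get_meminfo HugePages_Total returned 1024, satisfying (( 1024 == nr_hugepages + surp + resv )), and get_meminfo HugePages_Surp 0 and 1 both returned 0, so each node's expected count stays at 512.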
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:24.619
00:06:24.619 real	0m1.796s
00:06:24.619 user	0m0.768s
00:06:24.619 sys	0m1.008s
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:24.619 14:00:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:24.619 ************************************
00:06:24.619 END TEST even_2G_alloc
00:06:24.619 ************************************
00:06:24.619 14:00:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:06:24.619 14:00:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:24.619 14:00:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:24.619 14:00:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:24.877 ************************************
00:06:24.877 START TEST odd_alloc
00:06:24.877 ************************************
00:06:24.877 14:00:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:06:24.877 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:06:24.877 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
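[editor's note] Before the odd_alloc trace continues below, the arithmetic it performs is worth making explicit. size=2098176 kB at the default 2048 kB hugepage size is 2098176 / 2048 = 1024.5 pages, which the script turns into the odd count nr_hugepages=1025 (the HUGEMEM=2049 MB seen below says the same thing). An odd total is the point of this test: it cannot split evenly across two NUMA nodes. A sketch of the computation and of the per-node split the following trace walks through (the ceiling-division expression is an assumption; the loop mirrors hugepages.sh@81-84 as traced):

size_kb=2098176 default_hugepages=2048
nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))  # 1025, rounded up

_nr_hugepages=$nr_hugepages
_no_nodes=2
nodes_test=()
while (( _no_nodes > 0 )); do
	# give the highest-numbered node its share first
	nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))  # node1: 1025 / 2 = 512
	: $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))         # 513 pages left
	: $(( --_no_nodes ))
done
echo "${nodes_test[@]}"   # -> "513 512": node0 absorbs the odd page

The ": 513", ": 1", ": 0" lines in the trace below are exactly these no-op arithmetic expansions being echoed by xtrace.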
00:06:24.877 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:24.877 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:24.877 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:06:24.877 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:24.878 14:00:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:26.255 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:26.255 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:26.255 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:26.255 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:26.255 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:26.255 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:26.255 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:26.255 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:26.255 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:26.255 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:26.255 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:26.255 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:26.255 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:26.255 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:26.255 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:26.255 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:26.255 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:26.518 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29244076 kB' 'MemAvailable: 32825340 kB' 'Buffers: 2704 kB' 'Cached: 10214612 kB' 'SwapCached: 0 kB' 'Active: 7243892 kB' 'Inactive: 3506828 kB' 'Active(anon): 6849068 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536632 kB' 'Mapped: 199832 kB' 'Shmem: 6315664 kB' 'KReclaimable: 183304 kB' 'Slab: 539916 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356612 kB' 'KernelStack: 12464 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7970200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
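[editor's note] One line in the verify_nr_hugepages trace above is easy to misread: [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] tests the kernel's transparent-hugepage mode string, and only when THP is not disabled does the script sample AnonHugePages (the anon value that enters the accounting). A minimal sketch of that gate, assuming the string comes from the usual sysfs location, which the log itself does not show:

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
	# THP is active (here the bracketed mode is [madvise]), so anonymous
	# hugepages could distort the totals; record them to account for it
	anon=$(get_meminfo AnonHugePages)
fi

In this run the mode is [madvise] and AnonHugePages reads back 0 kB, which is exactly the anon=0 the trace prints a little later.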
[log condensed: setup/common.sh@31-32 xtrace repeats "read -r var val _" / "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" for every /proc/meminfo field from MemTotal through HardwareCorrupted before the AnonHugePages line matches]
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29244480 kB' 'MemAvailable: 32825744 kB' 'Buffers: 2704 kB' 'Cached: 10214616 kB' 'SwapCached: 0 kB' 'Active: 7243420 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848596 kB' 'Inactive(anon): 0
kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536136 kB' 'Mapped: 199760 kB' 'Shmem: 6315668 kB' 'KReclaimable: 183304 kB' 'Slab: 539916 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356612 kB' 'KernelStack: 12448 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7970220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.519 
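Every line of this trace comes from one helper in setup/common.sh. As a reading aid, here is a minimal sketch of what get_meminfo appears to do, reconstructed from the traced statements alone (the function body below is an illustration, not the verbatim SPDK source; the per-node branch is inferred from the [[ -e /sys/devices/system/node/node/meminfo ]] probe logged above, where node= was empty in this run):

  shopt -s extglob                        # the +([0-9]) pattern below needs extglob

  get_meminfo() {                         # usage: get_meminfo <key> [<numa-node>]
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      # with a node argument, read that node's meminfo instead of the global one
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      local line
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix on per-node lines
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue    # the long per-key scans condensed above
          echo "$val"                         # e.g. 0 for AnonHugePages in this run
          return 0
      done
      return 1
  }

Each call re-reads and re-scans the whole file, which is why the full meminfo snapshot is printf'd once per lookup in this log.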
00:06:26.519 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29244848 kB' 'MemAvailable: 32826112 kB' 'Buffers: 2704 kB' 'Cached: 10214616 kB' 'SwapCached: 0 kB' 'Active: 7243136 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848312 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535848 kB' 'Mapped: 199760 kB' 'Shmem: 6315668 kB' 'KReclaimable: 183304 kB' 'Slab: 539916 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356612 kB' 'KernelStack: 12448 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7970240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:26.520 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # ... per-key scan over the snapshot, MemTotal onward, looking for HugePages_Rsvd ...
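One quick cross-check on the snapshot just printed, while the scan works toward its key: the hugepage figures are internally consistent, since Hugetlb should equal HugePages_Total times Hugepagesize (values copied from the log; the check itself is just illustrative arithmetic):

  hugepages_total=1025
  hugepagesize_kb=2048
  echo $(( hugepages_total * hugepagesize_kb ))   # 2099200, matching 'Hugetlb: 2099200 kB'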
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:06:26.521 nr_hugepages=1025
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:26.521 resv_hugepages=0
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:26.521 surplus_hugepages=0
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:26.521 anon_hugepages=0
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
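Taken together, the completed lookups feed the odd_alloc assertions traced at setup/hugepages.sh@107 and @109: the test requested a deliberately odd page count (1025) and verifies the kernel honored it exactly, with no surplus or reserved pages folded into the total. A sketch of that check, reusing the helper sketched earlier and this run's values (variable names mirror the trace; the failure messages are illustrative):

  anon=$(get_meminfo AnonHugePages)     # 0 kB of transparent hugepages in use
  surp=$(get_meminfo HugePages_Surp)    # 0 surplus pages in this run
  resv=$(get_meminfo HugePages_Rsvd)    # 0 reserved pages in this run
  nr_hugepages=1025                     # the odd count requested earlier in the test

  # the two assertions traced at hugepages.sh@107 and @109
  (( 1025 == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages" >&2
  (( 1025 == nr_hugepages ))               || echo "odd hugepage count not honored" >&2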
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29244596 kB' 'MemAvailable: 32825860 kB' 'Buffers: 2704 kB' 'Cached: 10214652 kB' 'SwapCached: 0 kB' 'Active: 7243456 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848632 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536136 kB' 'Mapped: 199760 kB' 'Shmem: 6315704 kB' 'KReclaimable: 183304 kB' 'Slab: 539916 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356612 kB' 'KernelStack: 12448 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7970260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:26.521 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # ... per-key scan over the snapshot, MemTotal onward, looking for HugePages_Total ...
00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- 
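
The read loop above is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it reaches the requested key, here HugePages_Total, which reports 1025 pages on this box. A minimal standalone sketch of that lookup, assuming a Linux /proc/meminfo (the function name is illustrative, not the suite's own helper):

    #!/usr/bin/env bash
    # Print the value column of one /proc/meminfo field, e.g.
    #   meminfo_field HugePages_Total   -> 1025 on the machine traced above
    meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Lines look like "HugePages_Total:    1025"; splitting on ':'
            # and ' ' leaves the key in $var, the number in $val, and a
            # trailing "kB" unit (when present) in $_.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1   # field not present in this kernel
    }
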
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13719020 kB' 'MemUsed: 10900392 kB' 'SwapCached: 0 kB' 'Active: 5842544 kB' 'Inactive: 3329964 kB' 'Active(anon): 5583656 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8809496 kB' 'Mapped: 126088 kB' 'AnonPages: 366116 kB' 'Shmem: 5220644 kB' 'KernelStack: 7704 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118544 kB' 'Slab: 281708 kB' 'SReclaimable: 118544 kB' 'SUnreclaim: 163164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:26.522 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.523 14:00:43
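
For the per-node reads the same scanner is pointed at /sys/devices/system/node/node0/meminfo (node0 reports HugePages_Total: 512, node1 513 below), and the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node N " prefix that every line in those files carries. A sketch of that variant, assuming bash with extglob enabled and the standard sysfs node layout (again an illustrative function, not the suite's exact helper):

    #!/usr/bin/env bash
    # Read one field from a NUMA node's meminfo, e.g.
    #   node_meminfo_field 0 HugePages_Surp   -> 0 in the run traced above
    shopt -s extglob
    node_meminfo_field() {
        local node=$1 get=$2 line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }    # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
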
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15525296 kB' 'MemUsed: 3881948 kB' 'SwapCached: 0 kB' 'Active: 1400920 kB' 'Inactive: 176864 kB' 'Active(anon): 1264984 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1407904 kB' 'Mapped: 73672 kB' 'AnonPages: 170020 kB' 'Shmem: 1095104 kB' 'KernelStack: 4744 kB' 'PageTables: 3296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64760 kB' 'Slab: 258208 kB' 'SReclaimable: 64760 kB' 'SUnreclaim: 193448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:06:26.523 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 --
# echo 'node0=512 expecting 513' 00:06:26.783 node0=512 expecting 513 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:06:26.783 node1=513 expecting 512 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:06:26.783 00:06:26.783 real 0m1.894s 00:06:26.783 user 0m0.779s 00:06:26.783 sys 0m1.094s 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.783 14:00:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:26.783 ************************************ 00:06:26.783 END TEST odd_alloc 00:06:26.783 ************************************ 00:06:26.783 14:00:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:26.783 14:00:43 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.783 14:00:43 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.783 14:00:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:26.783 ************************************ 00:06:26.783 START TEST custom_alloc 00:06:26.783 ************************************ 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.783 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:26.784 14:00:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:28.159 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:28.159 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:28.159 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:28.159 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:28.159 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:28.159 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:28.159 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:28.159 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:28.159 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:28.159 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:28.159 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:28.159 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:28.159 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:28.159 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:28.159 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:28.159 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:28.159 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- 
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:28.422 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28210040 kB' 'MemAvailable: 31791304 kB' 'Buffers: 2704 kB' 'Cached: 10214748 kB' 'SwapCached: 0 kB' 'Active: 7243788 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848964 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536392 kB' 'Mapped: 199900 kB' 'Shmem: 6315800 kB' 'KReclaimable: 183304 kB' 'Slab: 539936 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356632 kB' 'KernelStack: 12480 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7970464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.423 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the @32 key test, @32 continue, @31 IFS=': ', @31 read cycle repeats for every following /proc/meminfo key until AnonHugePages is reached ...]
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
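The block above is common.sh's get_meminfo walking /proc/meminfo line by line: mapfile slurps the file, the "Node +([0-9]) " substitution strips per-node prefixes (only relevant when a node-specific meminfo file is read), and an IFS=': ' read splits each line into key and value, continuing until the requested key matches. A self-contained sketch of the same technique follows; it mirrors the flow the trace shows but is not the verbatim SPDK helper:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, read the per-node file instead; its lines
        # carry a "Node N " prefix, stripped below (cf. common.sh@29).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long run of @32 checks above
            echo "$val"                        # cf. common.sh@33
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Total   # prints 1536 on this box, per the snapshot

With IFS=': ' the unit suffix lands in the throwaway field, so callers get a bare number back, which is why the trace can assign anon=0 directly.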
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28210648 kB' 'MemAvailable: 31791912 kB' 'Buffers: 2704 kB' 'Cached: 10214752 kB' 'SwapCached: 0 kB' 'Active: 7243700 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848876 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536272 kB' 'Mapped: 199768 kB' 'Shmem: 6315804 kB' 'KReclaimable: 183304 kB' 'Slab: 539904 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356600 kB' 'KernelStack: 12448 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7970484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.424 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the @32 key test, @32 continue, @31 IFS=': ', @31 read cycle repeats for every following /proc/meminfo key, including HugePages_Total, HugePages_Free and HugePages_Rsvd, until HugePages_Surp is reached ...]
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
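surp=0 here means no surplus pages: nothing beyond the configured pool was allocated through overcommit. Since this test deliberately splits the pool 512/1024 across the nodes, the per-node kernel counters are the other place worth checking. A hedged one-liner against the standard Linux sysfs layout (paths assume the 2048 kB page size reported by Hugepagesize in the snapshots):

    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        echo "${node##*/}: total=$(<"$hp/nr_hugepages")" \
             "free=$(<"$hp/free_hugepages")" \
             "surplus=$(<"$hp/surplus_hugepages")"
    done
    # Expected on this box: node0 total=512, node1 total=1024, surplus=0 on both.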
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.426 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28211336 kB' 'MemAvailable: 31792600 kB' 'Buffers: 2704 kB' 'Cached: 10214768 kB' 'SwapCached: 0 kB' 'Active: 7243712 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536316 kB' 'Mapped: 199768 kB' 'Shmem: 6315820 kB' 'KReclaimable: 183304 kB' 'Slab: 539904 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356600 kB' 'KernelStack: 12464 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7970504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
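Before the scan below walks down to HugePages_Rsvd, the snapshot just printed already contains every number verify_nr_hugepages cares about: HugePages_Total matches the 1536 requested, HugePages_Free equals the total, and Rsvd, Surp and AnonHugePages are all zero. Reduced to the arithmetic, this is a sketch of what the verification plausibly amounts to, reusing the get_meminfo sketch from earlier; it is not the literal hugepages.sh checks:

    nr_hugepages=1536                      # from hugepages.sh@188
    anon=$(get_meminfo AnonHugePages)      # 0 (kB)
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0, per the snapshot above
    total=$(get_meminfo HugePages_Total)   # 1536 = 512 (node 0) + 1024 (node 1)
    ((total == nr_hugepages && anon == 0 && surp == 0 && resv == 0)) &&
        echo "hugepage pool verified"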
00:06:28.427 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:28.427 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:06:28.427 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.427 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the @32 key test, @32 continue, @31 IFS=': ', @31 read cycle repeats for every following /proc/meminfo key ...]
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc 
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.428 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.429 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28211084 kB' 'MemAvailable: 31792348 kB' 'Buffers: 2704 kB' 'Cached: 10214788 kB' 'SwapCached: 0 kB' 'Active: 7243712 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536312 kB' 'Mapped: 199768 kB' 'Shmem: 6315840 kB' 'KReclaimable: 183304 kB' 'Slab: 539896 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356592 kB' 'KernelStack: 12464 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7970524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
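For readers following the trace: the loop being stepped through above is setup/common.sh's get_meminfo helper, which loads a meminfo file into an array, strips the "Node N " prefix that the per-node sysfs copies carry, splits each "Key: value kB" line on IFS=': ', and echoes the value of the requested key (the backslash-escaped right-hand side of the [[ ]] forces a literal, non-glob match). Below is a minimal standalone sketch of that parsing pattern; the function name get_meminfo_value is hypothetical and this is not the SPDK source itself:

#!/usr/bin/env bash
shopt -s extglob # needed for the +([0-9]) pattern below

# Hypothetical stand-in for the traced helper: print the value of one
# meminfo key, from /proc/meminfo or from a node's sysfs meminfo file.
get_meminfo_value() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local line var val _
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	# Per-node lines look like "Node 0 MemFree: ..."; drop the prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		# Each non-matching key hits "continue", exactly as in the trace.
		[[ $var == "$get" ]] || continue
		echo "${val:-0}"
		return 0
	done
	echo 0
}

get_meminfo_value HugePages_Rsvd    # -> 0 in the trace above
get_meminfo_value HugePages_Surp 0  # node 0's surplus, read from sysfs

The linear scan is why the log shows one compare-and-continue pair per meminfo key: the helper is called once per statistic, and each call re-reads and re-walks the whole file.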
00:06:28.429 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:28.429 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the compare-and-continue cycle repeats for every key in the dump above until the requested key matches]
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
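The @30 lines above record the custom_alloc split under test, 512 pages on node0 and 1024 on node1 (1536 in total), and the @107/@110 arithmetic asserts that the kernel's global HugePages_Total equals that request plus surplus and reserved pages. A hedged sketch of the same bookkeeping follows, using only standard /proc/meminfo fields; the want array and the messages are illustrative, not the test's own code:

#!/usr/bin/env bash
# Per-node request this run uses: node0=512, node1=1024 (from the trace).
declare -A want=([0]=512 [1]=1024)

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

expected=0
for node in "${!want[@]}"; do
	(( expected += want[node] ))
done

# Mirrors the (( 1536 == nr_hugepages + surp + resv )) check in the log.
(( total == expected + surp + resv )) \
	|| echo "HugePages_Total $total != requested $expected + surp $surp + resv $resv"

With surplus and reserved both 0 here, the check reduces to total == 1536, which is what the dump above reports.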
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13730552 kB' 'MemUsed: 10888860 kB' 'SwapCached: 0 kB' 'Active: 5842512 kB' 'Inactive: 3329964 kB' 'Active(anon): 5583624 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8809500 kB' 'Mapped: 126080 kB' 'AnonPages: 366060 kB' 'Shmem: 5220648 kB' 'KernelStack: 7688 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118544 kB' 'Slab: 281664 kB' 'SReclaimable: 118544 kB' 'SUnreclaim: 163120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.430 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the compare-and-continue cycle repeats for every key in node0's dump until the requested key matches]
00:06:28.431 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
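Node0's dump above reports HugePages_Total: 512, matching its share of the allocation, and the trace now repeats the same get_meminfo walk for node 1, whose dump below should report 1024. The per-node counters come straight from sysfs, so they can be cross-checked with a plain loop like the following sketch (not the test's own helper; field positions assume the "Node N Key: value" layout shown in the dumps):

#!/usr/bin/env bash
# Print each NUMA node's hugepage counters from its sysfs meminfo file.
for f in /sys/devices/system/node/node[0-9]*/meminfo; do
	node=${f%/meminfo}
	node=${node##*node}
	# Per-node lines read "Node N HugePages_Total: <count>", so the count is $4.
	total=$(awk '/HugePages_Total:/ {print $4}' "$f")
	free=$(awk '/HugePages_Free:/ {print $4}' "$f")
	surp=$(awk '/HugePages_Surp:/ {print $4}' "$f")
	printf 'node%s: total=%s free=%s surplus=%s\n' "$node" "$total" "$free" "$surp"
done

On this box that loop would print node0: total=512 and node1: total=1024, the two values get_nodes recorded earlier.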
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 14481908 kB' 'MemUsed: 4925336 kB' 'SwapCached: 0 kB' 'Active: 1401508 kB' 'Inactive: 176864 kB' 'Active(anon): 1265572 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1408032 kB' 'Mapped: 73672 kB' 'AnonPages: 170548 kB' 'Shmem: 1095232 kB' 'KernelStack: 4824 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64760 kB' 'Slab: 258224 kB' 'SReclaimable: 64760 kB' 'SUnreclaim: 193464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.432 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the compare-and-continue cycle runs through node1's dump in the same way; the trace continues]
00:06:28.692 14:00:45
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:28.692 node0=512 expecting 512 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:06:28.692 node1=1024 expecting 1024 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:28.692 00:06:28.692 real 0m1.863s 00:06:28.692 user 0m0.789s 00:06:28.692 sys 0m1.059s 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.692 14:00:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:28.692 ************************************ 00:06:28.692 END TEST custom_alloc 00:06:28.692 ************************************ 00:06:28.692 14:00:45 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:28.692 14:00:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.692 14:00:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.692 14:00:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:28.692 ************************************ 00:06:28.692 START TEST no_shrink_alloc 00:06:28.692 ************************************ 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
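The get_test_nr_hugepages trace above converts a requested pool size into a page count before the test starts. A minimal sketch of that arithmetic, assuming a 2048 kB hugepage size (which the meminfo snapshots below confirm); the variable names are illustrative, not SPDK's:

    # Both values are in kB, the unit /proc/meminfo reports.
    size_kb=2097152                                            # requested pool: 2 GiB
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    nr_hugepages=$(( size_kb / hp_kb ))                        # 2097152 / 2048 = 1024

With a single user node ('0'), the whole count lands on one node, matching nodes_test[_no_nodes]=1024 in the trace.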
00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:28.692 14:00:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:30.068 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:30.068 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:30.068 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:30.068 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:30.068 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:30.068 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:30.068 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:30.068 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:30.068 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:30.068 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:30.068 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:30.068 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:30.068 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:30.068 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:30.068 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:30.068 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:30.068 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:30.068 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:30.069 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:30.069 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:30.069 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:30.069 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29234788 kB' 'MemAvailable: 32816052 kB' 'Buffers: 2704 kB' 'Cached: 10214868 kB' 'SwapCached: 0 kB' 'Active: 7244484 kB' 'Inactive: 3506828 kB' 'Active(anon): 6849660 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536916 kB' 'Mapped: 199840 kB' 'Shmem: 6315920 kB' 'KReclaimable: 183304 kB' 'Slab: 539784 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356480 kB' 'KernelStack: 12528 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7970220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
[xtrace elided (00:06:30.069-00:06:30.070): the setup/common.sh@31-32 loop scans that snapshot key by key for AnonHugePages, continuing past every other field]
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:30.070 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29235552 kB' 'MemAvailable: 32816816 kB' 'Buffers: 2704 kB' 'Cached: 10214872 kB' 'SwapCached: 0 kB' 'Active: 7243568 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848744 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536040 kB' 'Mapped: 199776 kB' 'Shmem: 6315924 kB' 'KReclaimable: 183304 kB' 'Slab: 539784 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356480 kB' 'KernelStack: 12448 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7970368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
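Every get_meminfo call in this log is the same scan: split each /proc/meminfo line on ':' and spaces, skip keys that do not match, and echo the value of the one that does, which is why each lookup produces one "continue" per non-matching field in the xtrace. A minimal re-implementation of that pattern, as a sketch rather than the exact setup/common.sh code (it omits the per-node /sys/devices/system/node path and the "Node N" prefix stripping visible above):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one "continue" per mismatched key
            echo "$val"                        # e.g. 0 for AnonHugePages here
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this host, matching the "echo 0" below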
'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.071 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 
14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 
14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.334 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.335 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.336 14:00:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29235844 kB' 'MemAvailable: 32817108 kB' 'Buffers: 2704 kB' 'Cached: 10214892 kB' 'SwapCached: 0 kB' 'Active: 7243556 kB' 'Inactive: 3506828 kB' 'Active(anon): 6848732 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536044 kB' 'Mapped: 199776 kB' 'Shmem: 6315944 kB' 'KReclaimable: 183304 kB' 'Slab: 539784 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356480 kB' 'KernelStack: 12448 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7970392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:30.336 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.336 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:30.336 14:00:47 setup.sh.hugepages.no_shrink_alloc 
00:06:30.336 [... setup/common.sh@31-32: per-field scan of the snapshot above; every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits 'continue' ...]
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
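The lookup pattern visible in the trace (mapfile the file, then read each line with IFS=': ', 'continue' on non-matching fields, echo the value on a match) can be reproduced outside the harness. The following is a simplified sketch of that logic, not the exact SPDK source; the single-digit 'Node N ' prefix strip stands in for the extglob pattern the trace shows:

  get_meminfo() {                     # usage: get_meminfo <field> [node]
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      local -a mem
      # switch to the per-node view when a node index is given and present
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node [0-9] }")   # per-node lines carry a 'Node N ' prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo HugePages_Rsvd, this prints 0 here, which is the value hugepages.sh@100 stores in resv.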
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:30.339 nr_hugepages=1024
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:30.339 resv_hugepages=0
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:30.339 surplus_hugepages=0
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:30.339 anon_hugepages=0
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:30.339 [... setup/common.sh@17-31: get_meminfo preamble as above, with get=HugePages_Total, node unset, mem_f=/proc/meminfo ...]
00:06:30.339 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29237172 kB' 'MemAvailable: 32818436 kB' 'Buffers: 2704 kB' 'Cached: 10214916 kB' 'SwapCached: 0 kB' 'Active: 7244040 kB' 'Inactive: 3506828 kB' 'Active(anon): 6849216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536544 kB' 'Mapped: 199768 kB' 'Shmem: 6315968 kB' 'KReclaimable: 183304 kB' 'Slab: 539784 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356480 kB' 'KernelStack: 12480 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7970780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
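The arithmetic guards at hugepages.sh@107-110 above encode the invariant this test checks: the kernel's HugePages_Total must equal the requested pool plus surplus plus reserved pages. A hedged restatement of that check, reusing the get_meminfo sketch shown earlier:

  nr_hugepages=1024                          # requested pool size
  surp=$(get_meminfo HugePages_Surp)         # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
  total=$(get_meminfo HugePages_Total)       # 1024 in this run
  (( total == nr_hugepages + surp + resv )) ||
      echo "hugepage accounting mismatch: $total != $nr_hugepages+$surp+$resv" >&2

With surp=0 and resv=0, both comparisons (@107 and @109) reduce to 1024 == 1024 and pass.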
00:06:30.339 [... setup/common.sh@31-32: per-field scan of the snapshot above; every field from MemTotal through Unaccepted fails the HugePages_Total match and hits 'continue' ...]
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:30.341 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
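get_nodes walks /sys/devices/system/node/node* and records a per-node hugepage count; the trace shows node0 landing at 1024 and node1 at 0. Those values are consistent with reading each node's 2048 kB pool from sysfs, sketched below (the per-node sysfs path is standard kernel ABI, not taken from this trace):

  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
      idx=${node##*node}    # '/sys/devices/system/node/node1' -> '1'
      nodes_sys[idx]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"   # 2 on this machine

The expansion ${node##*node} removes the longest prefix ending in 'node', leaving just the index that becomes the array subscript.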
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:30.342 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12658828 kB' 'MemUsed: 11960584 kB' 'SwapCached: 0 kB' 'Active: 5842296 kB' 'Inactive: 3329964 kB' 'Active(anon): 5583408 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8809588 kB' 'Mapped: 126096 kB' 'AnonPages: 365844 kB' 'Shmem: 5220736 kB' 'KernelStack: 7720 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118544 kB' 'Slab: 281704 kB' 'SReclaimable: 118544 kB' 'SUnreclaim: 163160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
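Because a node index (0) was passed this time, common.sh@24 above swaps mem_f to the per-node file /sys/devices/system/node/node0/meminfo, where every line carries a 'Node 0 ' prefix. The expansion at common.sh@29 strips that prefix so the same field names match in both views. A small demonstration of the idea (the real script uses this extglob pattern; the sample line is illustrative):

  shopt -s extglob
  line='Node 0 HugePages_Total:  1024'
  line=${line#Node +([0-9]) }        # -> 'HugePages_Total:  1024'
  IFS=': ' read -r var val _ <<< "$line"
  echo "$var=$val"                   # -> HugePages_Total=1024

After the strip, the IFS=': ' read splits the line exactly as it does for /proc/meminfo.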
00:06:30.342 [... setup/common.sh@31-32: per-field scan of the node0 snapshot above; every field from MemTotal through HugePages_Free fails the HugePages_Surp match and hits 'continue' ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:30.343 node0=1024 expecting 1024 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.343 14:00:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:31.718 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:31.719 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:31.719 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:31.719 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:31.719 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:31.719 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:31.719 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:31.719 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:31.719 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:31.719 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:06:31.719 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:06:31.719 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:06:31.719 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:06:31.719 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:06:31.719 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:06:31.719 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:06:31.719 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:06:31.719 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:31.719 14:00:48 
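The check traced at setup/hugepages.sh@126-130 above is the heart of this verification pass: for every NUMA node it compares the hugepage count the test expects against the count read back from the system. A minimal sketch of that loop, assuming simplified bookkeeping (hypothetical reduction; the real script also records the counts in the sorted_t/sorted_s sets seen at @127):

#!/usr/bin/env bash
# Hypothetical reduction of the per-node hugepage check.
# nodes_test holds expected page counts, nodes_sys the observed ones.
declare -A nodes_test=( [0]=1024 )
declare -A nodes_sys=( [0]=1024 )
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
    # Mirrors the @130 comparison: any mismatch fails verification.
    [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
done

Here both counts are 1024, so the comparison passes and the NRHUGE=512 re-run of setup.sh above only has to report that enough pages are already allocated.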
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:31.719 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29223744 kB' 'MemAvailable: 32805008 kB' 'Buffers: 2704 kB' 'Cached: 10214988 kB' 'SwapCached: 0 kB' 'Active: 7250040 kB' 'Inactive: 3506828 kB' 'Active(anon): 6855216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542332 kB' 'Mapped: 200724 kB' 'Shmem: 6316040 kB' 'KReclaimable: 183304 kB' 'Slab: 539736 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356432 kB' 'KernelStack: 12464 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7977244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195876 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:31.720 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (field scan vs AnonHugePages: MemTotal through HardwareCorrupted all fail the match and are skipped via continue)
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
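Every get_meminfo call traced in this pass follows the same shape: pick /proc/meminfo (or a per-node copy when a node argument is given, per the @23 test), strip any "Node N " prefix (the @29 extglob substitution), then scan field by field until the requested key matches and echo its value. A minimal sketch of that scan, assuming the system-wide file only (hypothetical simplification; the traced helper buffers the file with mapfile first):

# Return the value of one /proc/meminfo field, e.g. get_meminfo HugePages_Surp
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # IFS=': ' splits "HugePages_Surp:        0" into var and val.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done </proc/meminfo
    return 1
}

Against the snapshot printed above, get_meminfo AnonHugePages yields 0, exactly the value the @33 echo/return pair hands back to hugepages.sh@97.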
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29224420 kB' 'MemAvailable: 32805684 kB' 'Buffers: 2704 kB' 'Cached: 10214992 kB' 'SwapCached: 0 kB' 'Active: 7246348 kB' 'Inactive: 3506828 kB' 'Active(anon): 6851524 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538676 kB' 'Mapped: 200204 kB' 'Shmem: 6316044 kB' 'KReclaimable: 183304 kB' 'Slab: 539756 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356452 kB' 'KernelStack: 12480 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7973948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB'
00:06:31.982 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (field scan vs HugePages_Surp: MemTotal through HugePages_Rsvd all fail the match and are skipped via continue)
00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
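With anon and surp both 0, the pass goes on to read HugePages_Rsvd. The four counters this verifier consumes can also be spot-checked in one pass; a hypothetical one-liner, with output matching the snapshots above:

grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
# HugePages_Total:    1024
# HugePages_Free:     1024
# HugePages_Rsvd:        0
# HugePages_Surp:        0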
'Inactive: 3506828 kB' 'Active(anon): 6854768 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541920 kB' 'Mapped: 200204 kB' 'Shmem: 6316064 kB' 'KReclaimable: 183304 kB' 'Slab: 539756 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356452 kB' 'KernelStack: 12512 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7977284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195860 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.983 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.983 14:00:48 
[xtrace condensed: the IFS=': ' / read -r var val _ / continue sequence shown above repeats verbatim for every remaining meminfo field, Active through Unaccepted, none matching HugePages_Rsvd]
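The loop condensed above is hard to follow in xtrace form; in source terms it is the get_meminfo helper in setup/common.sh splitting each meminfo line on ': ' and echoing the value of the requested key. A minimal sketch reconstructed from the trace (a paraphrase of the traced logic, not the verbatim SPDK source):

  shopt -s extglob                  # needed for the +([0-9]) pattern below
  get_meminfo() {                   # usage: get_meminfo HugePages_Rsvd [node]
      local get=$1 node=$2 var val _ mem line
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")            # strip any "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Rsvd val=0
          [[ $var == "$get" ]] || continue        # the iterations condensed above
          echo "$val"
          return 0
      done
  }

Here it walks the whole snapshot until HugePages_Rsvd matches and echoes 0, which the caller captures below as resv=0.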
# read -r var val _ 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:31.984 nr_hugepages=1024 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:31.984 resv_hugepages=0 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:31.984 surplus_hugepages=0 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:31.984 anon_hugepages=0 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:31.984 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29225116 kB' 'MemAvailable: 32806380 kB' 'Buffers: 2704 kB' 
'Cached: 10215032 kB' 'SwapCached: 0 kB' 'Active: 7244148 kB' 'Inactive: 3506828 kB' 'Active(anon): 6849324 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536460 kB' 'Mapped: 199788 kB' 'Shmem: 6316084 kB' 'KReclaimable: 183304 kB' 'Slab: 539752 kB' 'SReclaimable: 183304 kB' 'SUnreclaim: 356448 kB' 'KernelStack: 12480 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7971184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 14891008 kB' 'DirectMap1G: 35651584 kB' 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:31.985 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.985 14:00:48 
[xtrace condensed: the same per-field sequence repeats for Active through Unaccepted, none matching HugePages_Total]
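Once this second scan returns 1024 for HugePages_Total, the surrounding assertions in setup/hugepages.sh (@100 through @110 in the trace) reduce to a small consistency check on the hugepage pool. A hedged sketch, reusing the get_meminfo sketch above and the values from this run:

  nr_hugepages=1024                        # what the test configured
  resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
  surp=$(get_meminfo HugePages_Surp)       # 0 in this run
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  total=$(get_meminfo HugePages_Total)     # 1024 in this run
  (( total == nr_hugepages + surp + resv ))    # the @107/@110 identity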
setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12650316 kB' 'MemUsed: 11969096 kB' 'SwapCached: 0 kB' 'Active: 5842728 kB' 'Inactive: 3329964 kB' 'Active(anon): 5583840 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8809712 
kB' 'Mapped: 126116 kB' 'AnonPages: 366140 kB' 'Shmem: 5220860 kB' 'KernelStack: 7736 kB' 'PageTables: 4660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118544 kB' 'Slab: 281768 kB' 'SReclaimable: 118544 kB' 'SUnreclaim: 163224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
[xtrace condensed: the same per-field sequence repeats over the node0 snapshot, Inactive(anon) through FilePmdMapped, none matching HugePages_Surp]
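This third scan runs against the node-scoped snapshot rather than /proc/meminfo: with node=0 the helper reads /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0 " prefix that the extglob expansion strips first. A sketch of just that difference:

  shopt -s extglob
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  # each raw line looks like:  Node 0 HugePages_Surp:     0
  mem=("${mem[@]#Node +([0-9]) }")   # -> "HugePages_Surp:     0"

so the same field loop can be reused unchanged for per-node queries.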
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:31.986 node0=1024 expecting 1024 00:06:31.986 14:00:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:31.986 00:06:31.986 real 0m3.426s 00:06:31.987 user 0m1.409s 00:06:31.987 sys 0m1.967s 00:06:31.987 14:00:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.987 14:00:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:31.987 ************************************ 00:06:31.987 END TEST no_shrink_alloc 00:06:31.987 ************************************ 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:31.987 14:00:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:31.987 00:06:31.987 real 0m14.370s 00:06:31.987 user 0m5.670s 00:06:31.987 sys 0m7.751s 00:06:31.987 14:00:48 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.987 14:00:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:31.987 ************************************ 00:06:31.987 END TEST hugepages 00:06:31.987 ************************************ 00:06:31.987 14:00:48 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:31.987 14:00:48 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.987 14:00:48 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.987 14:00:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:32.243 ************************************ 00:06:32.243 START TEST driver 00:06:32.243 ************************************ 00:06:32.243 14:00:48 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:32.243 * Looking for test storage... 
00:06:32.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:32.243 14:00:48 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:32.243 14:00:48 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:32.243 14:00:48 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:35.527 14:00:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:35.527 14:00:51 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.527 14:00:51 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.527 14:00:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:35.527 ************************************ 00:06:35.527 START TEST guess_driver 00:06:35.527 ************************************
00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:35.527 Looking for driver=vfio-pci 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:35.527 14:00:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:06:36.905 14:00:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:36.905 14:00:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:36.905 14:00:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[xtrace condensed: the @58/@61/@57 marker-and-driver check repeats for each remaining device reported by setup.sh config, every one resolving to vfio-pci]
00:06:37.838 14:00:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:37.838 14:00:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:37.838 14:00:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:37.838 14:00:54 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:37.838 14:00:54 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:37.838 14:00:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:37.838 14:00:54 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:41.125 00:06:41.125 real 0m5.550s 00:06:41.125 user 0m1.367s 00:06:41.125 sys 0m2.362s 00:06:41.125 14:00:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.125 14:00:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:41.125 ************************************ 00:06:41.125 END TEST guess_driver 00:06:41.125 ************************************ 00:06:41.125 00:06:41.125 real 0m8.533s 00:06:41.125 user 0m2.098s 00:06:41.125 sys 0m3.642s 00:06:41.125 14:00:57 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.125
14:00:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:41.125 ************************************ 00:06:41.125 END TEST driver 00:06:41.125 ************************************ 00:06:41.125 14:00:57 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:41.125 14:00:57 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.125 14:00:57 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.125 14:00:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:41.125 ************************************ 00:06:41.125 START TEST devices 00:06:41.125 ************************************ 00:06:41.125 14:00:57 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:41.125 * Looking for test storage... 00:06:41.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:41.125 14:00:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:41.125 14:00:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:41.125 14:00:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:41.125 14:00:57 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:42.501 14:00:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:06:42.501 14:00:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:42.501 14:00:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:42.501 14:00:59 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:42.760 No valid GPT data, 
bailing 00:06:42.760 14:00:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:42.760 14:00:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:42.760 14:00:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:42.760 14:00:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:42.760 14:00:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:42.760 14:00:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:42.760 14:00:59 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:06:42.760 14:00:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:06:42.760 14:00:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:42.760 14:00:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:06:42.760 14:00:59 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:42.760 14:00:59 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:42.760 14:00:59 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:42.760 14:00:59 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.760 14:00:59 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.760 14:00:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:42.760 ************************************ 00:06:42.760 START TEST nvme_mount 00:06:42.760 ************************************ 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:42.760 14:00:59 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:42.760 14:00:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:43.694 Creating new GPT entries in memory. 00:06:43.695 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:43.695 other utilities. 00:06:43.695 14:01:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:43.695 14:01:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:43.695 14:01:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:43.695 14:01:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:43.695 14:01:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:45.071 Creating new GPT entries in memory. 00:06:45.071 The operation has completed successfully. 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2388572 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
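The nvme_mount test above has just driven a full partition-and-mount cycle: sgdisk --zap-all under flock, a udev-synchronized sgdisk --new=1:2048:2099199, mkfs.ext4 -qF on the new partition, and a mount under test/setup/nvme_mount. A sketch of the same cycle replayed against a throwaway loop device so it cannot touch a real disk; all paths and sizes are illustrative, and partprobe stands in for the suite's udev-event synchronization:

#!/usr/bin/env bash
# Requires root, sgdisk and util-linux.
set -euo pipefail

img=$(mktemp /tmp/nvme_mount_demo.XXXXXX)
truncate -s 2G "$img"
dev=$(losetup --find --show --partscan "$img")
mnt=$(mktemp -d /tmp/nvme_mount_demo.mnt.XXXXXX)

sgdisk "$dev" --zap-all                 # destroy GPT and MBR data structures
sgdisk "$dev" --new=1:2048:2099199      # one ~1 GiB partition, as in the trace
partprobe "$dev"                        # re-read the partition table

mkfs.ext4 -qF "${dev}p1"                # same mkfs invocation as setup/common.sh
mount "${dev}p1" "$mnt"
touch "$mnt/test_nvme"                  # the dummy file verify() later checks for
mountpoint -q "$mnt" && echo "mounted: ${dev}p1 on $mnt"

umount "$mnt"                           # teardown
losetup -d "$dev"
rm -rf "$img" "$mnt"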
00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:45.071 14:01:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:46.445 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:46.445 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:46.445 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:46.445 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[xtrace condensed: the [[ <pci> == \0\0\0\0\:\8\2\:\0\0\.\0 ]] / read -r pci _ _ status cycle repeats, without matching, for 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0]
00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:46.446 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:46.446 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:46.703 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:46.703 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:46.703 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:46.703 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:06:46.703 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:46.703 14:01:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:46.703 14:01:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:46.703 14:01:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:46.703 14:01:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:46.703 14:01:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:46.703 14:01:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
[xtrace condensed: verify() again sets dev=0000:82:00.0, mounts=nvme0n1:nvme0n1, mount_point and test_file, reruns setup output config, matches 'Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev', sets found=1 and walks the remaining 0000:00:04.x/0000:80:04.x functions as above]
00:06:48.605 14:01:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:48.605 14:01:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:48.605 14:01:05 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:48.605 14:01:05 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:48.605 14:01:05 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:48.605 14:01:05 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:48.605 14:01:05 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' ''
[xtrace condensed: the final verify() pass, with no mount point and no test file, reruns setup output config, matches 'Active devices: data@nvme0n1, so not binding PCI dev', sets found=1 and walks the remaining PCI functions as above]
00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:50.244 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:50.244 00:06:50.244 real 0m7.458s 00:06:50.244 user 0m1.813s 00:06:50.244 sys 0m3.214s 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.244 14:01:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:50.244 ************************************ 00:06:50.244 END TEST nvme_mount 00:06:50.244 ************************************
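cleanup_nvme, traced twice above, is what returns the disk to a blank state between iterations: unmount if mounted, then wipefs the partition and the whole device. The '53 ef' bytes at offset 0x438 are the ext4 superblock magic, and the '45 46 49 20 50 41 52 54' bytes spell 'EFI PART', the GPT header signature. A condensed sketch of that teardown, with placeholder device and mount-point names:

# cleanup_disk is an illustrative name for the cleanup_nvme pattern above.
cleanup_disk() {
    local dev=$1 mnt=$2
    mountpoint -q "$mnt" && umount "$mnt"
    # wipefs erases known signatures in place: ext4 magic on the partition,
    # primary/backup GPT headers and the protective MBR on the whole disk.
    [[ -b ${dev}p1 ]] && wipefs --all "${dev}p1"
    [[ -b $dev ]] && wipefs --all "$dev"
}

# Preview what would be erased without touching the disk:
#   wipefs --no-act --all /dev/nvme0n1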
00:06:50.244 14:01:06 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:50.244 14:01:06 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.244 14:01:06 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.244 14:01:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:50.244 ************************************ 00:06:50.244 START TEST dm_mount 00:06:50.244 ************************************ 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:50.244 14:01:07 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:51.180 Creating new GPT entries in memory. 00:06:51.180 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:51.180 other utilities. 00:06:51.180 14:01:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:51.180 14:01:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:51.180 14:01:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:51.180 14:01:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:51.180 14:01:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:52.556 Creating new GPT entries in memory. 00:06:52.556 The operation has completed successfully. 
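dm_mount carves two equal partitions (sectors 2048-2099199 and 2099200-4196351, 2097152 sectors each) and, in the trace that follows, binds them into a single node with dmsetup create nvme_dm_test. A sketch of a linear device-mapper table that concatenates the two partitions; the table below is a generic example of what dmsetup consumes, not copied from SPDK's devices.sh, and should only be run as root on scratch devices:

# Assumes both partitions exist and are otherwise unused.
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
sz1=$(blockdev --getsz "$p1")   # size in 512-byte sectors (2097152 here)
sz2=$(blockdev --getsz "$p2")

# Table format: <logical start> <length> linear <backing dev> <backing offset>
dmsetup create nvme_dm_test <<TABLE
0 $sz1 linear $p1 0
$sz1 $sz2 linear $p2 0
TABLE

ls -l /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0, as in the readlink above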
00:06:52.556 14:01:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:52.556 14:01:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:52.556 14:01:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:52.556 14:01:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:52.556 14:01:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:53.494 The operation has completed successfully. 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2390992
00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:53.494 14:01:10 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
[xtrace condensed: verify() sets dev=0000:82:00.0, mounts=nvme0n1:nvme_dm_test, mount_point and test_file, reruns setup output config, matches 'Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev', sets found=1 and walks the 0000:00:04.x/0000:80:04.x functions without a match]
00:06:55.133 14:01:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:55.133 14:01:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:55.133 14:01:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:55.133 14:01:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:55.133 14:01:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:55.133 14:01:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:55.133 14:01:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
[xtrace condensed: the closing verify() pass, with no mount point or test file, reruns setup output config, matches 'Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev', sets found=1 and walks the remaining PCI functions]
00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:56.512 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:56.771 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:56.771 00:06:56.771 real 0m6.465s 00:06:56.771 user 0m1.136s 00:06:56.771 sys 0m2.214s 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.771 14:01:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:56.771 ************************************ 00:06:56.771 END TEST dm_mount 00:06:56.771 ************************************ 00:06:56.771 14:01:13 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:56.771 14:01:13 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:56.771 14:01:13 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:56.771 14:01:13 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:56.771 14:01:13 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:06:56.771 14:01:13 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:06:56.771 14:01:13 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:57.049 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:06:57.049 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:06:57.049 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:06:57.049 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:06:57.049 14:01:13 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:06:57.049 14:01:13 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:06:57.049 14:01:13 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:06:57.049 14:01:13 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:57.049 14:01:13 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:06:57.049 14:01:13 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:06:57.049 14:01:13 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:06:57.049
00:06:57.049 real 0m16.317s
00:06:57.049 user 0m3.754s
00:06:57.049 sys 0m6.811s
00:06:57.049 14:01:13 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:57.049 14:01:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:06:57.049 ************************************
00:06:57.049 END TEST devices
00:06:57.049 ************************************
00:06:57.049
00:06:57.049 real 0m52.431s
00:06:57.049 user 0m15.680s
00:06:57.049 sys 0m25.405s
00:06:57.049 14:01:13 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:57.049 14:01:13 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:06:57.049 ************************************
00:06:57.049 END TEST setup.sh
00:06:57.049 ************************************
00:06:57.049 14:01:13 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:06:58.960 Hugepages
00:06:58.960 node hugesize free / total
00:06:58.960 node0 1048576kB 0 / 0
00:06:58.960 node0 2048kB 2048 / 2048
00:06:58.960 node1 1048576kB 0 / 0
00:06:58.960 node1 2048kB 0 / 0
00:06:58.960
00:06:58.960 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:58.960 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:06:58.960 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:06:58.960 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:06:58.960 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:06:58.960 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:06:58.960 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:06:58.960 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:06:58.960 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:06:58.960 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:06:58.960 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:06:58.960 14:01:15 -- spdk/autotest.sh@130 -- # uname -s
00:06:58.960 14:01:15 --
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:58.960 14:01:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:58.960 14:01:15 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:00.339 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:00.339 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:00.339 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:00.339 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:00.339 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:00.339 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:00.339 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:00.339 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:00.339 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:01.278 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:07:01.278 14:01:18 -- common/autotest_common.sh@1532 -- # sleep 1 00:07:02.656 14:01:19 -- common/autotest_common.sh@1533 -- # bdfs=() 00:07:02.656 14:01:19 -- common/autotest_common.sh@1533 -- # local bdfs 00:07:02.656 14:01:19 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:07:02.656 14:01:19 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:07:02.656 14:01:19 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:02.656 14:01:19 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:02.656 14:01:19 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:02.656 14:01:19 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:02.656 14:01:19 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:02.656 14:01:19 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:07:02.656 14:01:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:07:02.656 14:01:19 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:03.594 Waiting for block devices as requested 00:07:03.852 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:07:03.852 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:04.109 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:04.109 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:04.109 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:04.109 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:04.368 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:04.368 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:04.368 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:04.368 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:04.627 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:04.627 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:04.627 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:04.885 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:04.885 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:04.885 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:04.885 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:05.143 14:01:21 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
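
[Note] The vfio-pci <-> ioatdma/nvme transitions above are setup.sh moving each PCI function between drivers through sysfs. A minimal sketch of that rebind pattern (illustrative BDF and target driver taken from this run; not the exact setup.sh code):

    # Rebind one PCI function to a different driver via sysfs (generic pattern).
    bdf=0000:82:00.0          # example BDF from this run
    driver=vfio-pci           # target driver
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach current driver
    echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the next bind
    echo "$bdf" > /sys/bus/pci/drivers_probe                      # ask the kernel to re-probe
    echo > "/sys/bus/pci/devices/$bdf/driver_override"            # clear the override afterwards
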
00:07:05.143 14:01:21 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:07:05.143 14:01:21 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:07:05.143 14:01:21 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:07:05.143 14:01:21 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:05.143 14:01:21 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:05.143 14:01:21 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:07:05.143 14:01:21 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:05.143 14:01:21 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:05.143 14:01:21 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:07:05.143 14:01:21 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:05.143 14:01:21 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:05.143 14:01:21 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:05.143 14:01:21 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:05.143 14:01:21 -- common/autotest_common.sh@1557 -- # continue 00:07:05.143 14:01:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:05.143 14:01:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.143 14:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.143 14:01:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:05.144 14:01:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.144 14:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.144 14:01:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:07.048 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:07.048 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:07.048 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:07.048 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:07.048 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:07.048 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:07.048 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:07.048 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:07.048 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:07.984 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:07:07.984 14:01:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:07.984 14:01:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.984 14:01:24 -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.984 14:01:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:07.984 14:01:24 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:07:07.984 14:01:24 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:07:07.984 14:01:24 -- common/autotest_common.sh@1577 -- # bdfs=() 00:07:07.984 14:01:24 -- common/autotest_common.sh@1577 -- # local bdfs 00:07:07.984 14:01:24 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:07:07.984 14:01:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:07.984 14:01:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:07.984 14:01:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:07.984 14:01:24 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:07.984 14:01:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:07.984 14:01:24 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:07:07.984 14:01:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:07:07.984 14:01:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:07.984 14:01:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:07:07.984 14:01:24 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:07:07.984 14:01:24 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:07.984 14:01:24 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:07:07.984 14:01:24 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:07:07.984 14:01:24 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:07:07.984 14:01:24 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2396480 00:07:07.984 14:01:24 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:07.984 14:01:24 -- common/autotest_common.sh@1598 -- # waitforlisten 2396480 00:07:07.984 14:01:24 -- common/autotest_common.sh@831 -- # '[' -z 2396480 ']' 00:07:07.984 14:01:24 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.984 14:01:24 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.984 14:01:24 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.985 14:01:24 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.985 14:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.243 [2024-07-26 14:01:24.925027] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
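
[Note] Pieced together from the xtrace above, the get_nvme_bdfs_by_id helper reduces to the following (a readable reconstruction from the trace, not a verbatim copy of autotest_common.sh):

    # Keep only NVMe BDFs whose PCI device ID matches (0x0a54 in this run);
    # get_nvme_bdfs is the autotest_common.sh helper traced earlier.
    get_nvme_bdfs_by_id() {
        local id=$1 bdf bdfs=()
        for bdf in $(get_nvme_bdfs); do
            if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]]; then
                bdfs+=("$bdf")
            fi
        done
        ((${#bdfs[@]})) && printf '%s\n' "${bdfs[@]}"
    }
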
00:07:08.243 [2024-07-26 14:01:24.925196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396480 ] 00:07:08.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.243 [2024-07-26 14:01:25.017712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.500 [2024-07-26 14:01:25.140806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.757 14:01:25 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.757 14:01:25 -- common/autotest_common.sh@864 -- # return 0 00:07:08.758 14:01:25 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:07:08.758 14:01:25 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:07:08.758 14:01:25 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:07:12.039 nvme0n1 00:07:12.039 14:01:28 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:12.039 [2024-07-26 14:01:28.818354] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:07:12.039 [2024-07-26 14:01:28.818405] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:07:12.039 request: 00:07:12.039 { 00:07:12.039 "nvme_ctrlr_name": "nvme0", 00:07:12.039 "password": "test", 00:07:12.039 "method": "bdev_nvme_opal_revert", 00:07:12.039 "req_id": 1 00:07:12.039 } 00:07:12.039 Got JSON-RPC error response 00:07:12.039 response: 00:07:12.039 { 00:07:12.039 "code": -32603, 00:07:12.039 "message": "Internal error" 00:07:12.039 } 00:07:12.039 14:01:28 -- common/autotest_common.sh@1604 -- # true 00:07:12.039 14:01:28 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:07:12.039 14:01:28 -- common/autotest_common.sh@1608 -- # killprocess 2396480 00:07:12.039 14:01:28 -- common/autotest_common.sh@950 -- # '[' -z 2396480 ']' 00:07:12.039 14:01:28 -- common/autotest_common.sh@954 -- # kill -0 2396480 00:07:12.039 14:01:28 -- common/autotest_common.sh@955 -- # uname 00:07:12.039 14:01:28 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.039 14:01:28 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2396480 00:07:12.039 14:01:28 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.039 14:01:28 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.039 14:01:28 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2396480' 00:07:12.039 killing process with pid 2396480 00:07:12.039 14:01:28 -- common/autotest_common.sh@969 -- # kill 2396480 00:07:12.039 14:01:28 -- common/autotest_common.sh@974 -- # wait 2396480 00:07:13.965 14:01:30 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:13.965 14:01:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:13.965 14:01:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:13.965 14:01:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:13.965 14:01:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:13.965 14:01:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.965 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:07:13.965 14:01:30 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:13.965 14:01:30 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:13.965 14:01:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.965 14:01:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.965 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:07:13.965 ************************************ 00:07:13.965 START TEST env 00:07:13.965 ************************************ 00:07:13.965 14:01:30 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:13.965 * Looking for test storage... 00:07:13.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:13.965 14:01:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:13.965 14:01:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.965 14:01:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.965 14:01:30 env -- common/autotest_common.sh@10 -- # set +x 00:07:13.965 ************************************ 00:07:13.965 START TEST env_memory 00:07:13.965 ************************************ 00:07:13.965 14:01:30 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:13.965 00:07:13.965 00:07:13.965 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.965 http://cunit.sourceforge.net/ 00:07:13.965 00:07:13.965 00:07:13.965 Suite: memory 00:07:14.223 Test: alloc and free memory map ...[2024-07-26 14:01:30.859142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:14.223 passed 00:07:14.223 Test: mem map translation ...[2024-07-26 14:01:30.914916] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:14.223 [2024-07-26 14:01:30.914978] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:14.223 [2024-07-26 14:01:30.915094] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:14.223 [2024-07-26 14:01:30.915129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:14.223 passed 00:07:14.223 Test: mem map registration ...[2024-07-26 14:01:31.032453] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:14.223 [2024-07-26 14:01:31.032520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:14.223 passed 00:07:14.482 Test: mem map adjacent registrations ...passed 00:07:14.482 00:07:14.482 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.482 suites 1 1 n/a 0 0 00:07:14.482 tests 4 4 4 0 0 00:07:14.482 asserts 152 152 152 0 n/a 00:07:14.482 00:07:14.482 Elapsed time = 0.381 seconds 00:07:14.482 00:07:14.482 real 0m0.393s 00:07:14.482 user 0m0.371s 00:07:14.482 sys 0m0.020s 00:07:14.482 14:01:31 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.482 14:01:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:14.482 ************************************ 00:07:14.482 END TEST env_memory 00:07:14.482 ************************************ 00:07:14.482 14:01:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:14.482 14:01:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.482 14:01:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.482 14:01:31 env -- common/autotest_common.sh@10 -- # set +x 00:07:14.482 ************************************ 00:07:14.482 START TEST env_vtophys 00:07:14.482 ************************************ 00:07:14.482 14:01:31 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:14.482 EAL: lib.eal log level changed from notice to debug 00:07:14.482 EAL: Detected lcore 0 as core 0 on socket 0 00:07:14.482 EAL: Detected lcore 1 as core 1 on socket 0 00:07:14.482 EAL: Detected lcore 2 as core 2 on socket 0 00:07:14.482 EAL: Detected lcore 3 as core 3 on socket 0 00:07:14.482 EAL: Detected lcore 4 as core 4 on socket 0 00:07:14.482 EAL: Detected lcore 5 as core 5 on socket 0 00:07:14.482 EAL: Detected lcore 6 as core 8 on socket 0 00:07:14.482 EAL: Detected lcore 7 as core 9 on socket 0 00:07:14.482 EAL: Detected lcore 8 as core 10 on socket 0 00:07:14.482 EAL: Detected lcore 9 as core 11 on socket 0 00:07:14.482 EAL: Detected lcore 10 as core 12 on socket 0 00:07:14.482 EAL: Detected lcore 11 as core 13 on socket 0 00:07:14.482 EAL: Detected lcore 12 as core 0 on socket 1 00:07:14.482 EAL: Detected lcore 13 as core 1 on socket 1 00:07:14.482 EAL: Detected lcore 14 as core 2 on socket 1 00:07:14.482 EAL: Detected lcore 15 as core 3 on socket 1 00:07:14.482 EAL: Detected lcore 16 as core 4 on socket 1 00:07:14.482 EAL: Detected lcore 17 as core 5 on socket 1 00:07:14.482 EAL: Detected lcore 18 as core 8 on socket 1 00:07:14.482 EAL: Detected lcore 19 as core 9 on socket 1 00:07:14.482 EAL: Detected lcore 20 as core 10 on socket 1 00:07:14.482 EAL: Detected lcore 21 as core 11 on socket 1 00:07:14.482 EAL: Detected lcore 22 as core 12 on socket 1 00:07:14.482 EAL: Detected lcore 23 as core 13 on socket 1 00:07:14.482 EAL: Detected lcore 24 as core 0 on socket 0 00:07:14.482 EAL: Detected lcore 25 as core 1 on socket 0 00:07:14.482 EAL: Detected lcore 26 as core 2 on socket 0 00:07:14.482 EAL: Detected lcore 27 as core 3 on socket 0 00:07:14.482 EAL: Detected lcore 28 as core 4 on socket 0 00:07:14.482 EAL: Detected lcore 29 as core 5 on socket 0 00:07:14.482 EAL: Detected lcore 30 as core 8 on socket 0 00:07:14.482 EAL: Detected lcore 31 as core 9 on socket 0 00:07:14.482 EAL: Detected lcore 32 as core 10 on socket 0 00:07:14.482 EAL: Detected lcore 33 as core 11 on socket 0 00:07:14.482 EAL: Detected lcore 34 as core 12 on socket 0 00:07:14.482 EAL: Detected lcore 35 as core 13 on socket 0 00:07:14.482 EAL: Detected lcore 36 as core 0 on socket 1 00:07:14.482 EAL: Detected lcore 37 as core 1 on socket 1 00:07:14.482 EAL: Detected lcore 38 as core 2 on socket 1 00:07:14.482 EAL: Detected lcore 39 as core 3 on socket 1 00:07:14.482 EAL: Detected lcore 40 as core 4 on socket 1 00:07:14.482 EAL: Detected lcore 41 as core 5 on socket 1 00:07:14.482 EAL: Detected lcore 42 as core 8 on socket 1 00:07:14.482 EAL: Detected lcore 43 as core 9 
on socket 1 00:07:14.482 EAL: Detected lcore 44 as core 10 on socket 1 00:07:14.482 EAL: Detected lcore 45 as core 11 on socket 1 00:07:14.482 EAL: Detected lcore 46 as core 12 on socket 1 00:07:14.482 EAL: Detected lcore 47 as core 13 on socket 1 00:07:14.482 EAL: Maximum logical cores by configuration: 128 00:07:14.482 EAL: Detected CPU lcores: 48 00:07:14.483 EAL: Detected NUMA nodes: 2 00:07:14.483 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:14.483 EAL: Detected shared linkage of DPDK 00:07:14.483 EAL: No shared files mode enabled, IPC will be disabled 00:07:14.483 EAL: Bus pci wants IOVA as 'DC' 00:07:14.483 EAL: Buses did not request a specific IOVA mode. 00:07:14.483 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:14.483 EAL: Selected IOVA mode 'VA' 00:07:14.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.483 EAL: Probing VFIO support... 00:07:14.483 EAL: IOMMU type 1 (Type 1) is supported 00:07:14.483 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:14.483 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:14.483 EAL: VFIO support initialized 00:07:14.483 EAL: Ask a virtual area of 0x2e000 bytes 00:07:14.483 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:14.483 EAL: Setting up physically contiguous memory... 00:07:14.483 EAL: Setting maximum number of open files to 524288 00:07:14.483 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:14.483 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:14.483 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.483 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.483 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.483 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.483 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:14.483 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:14.483 EAL: Ask a virtual 
area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:14.483 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:14.483 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:14.483 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.483 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:14.483 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:14.483 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.483 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:07:14.483 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:14.483 EAL: Hugepages will be freed exactly as allocated. 00:07:14.483 EAL: No shared files mode enabled, IPC is disabled 00:07:14.483 EAL: No shared files mode enabled, IPC is disabled 00:07:14.483 EAL: TSC frequency is ~2700000 KHz 00:07:14.483 EAL: Main lcore 0 is ready (tid=7fcf12294a00;cpuset=[0]) 00:07:14.483 EAL: Trying to obtain current memory policy. 00:07:14.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.483 EAL: Restoring previous memory policy: 0 00:07:14.483 EAL: request: mp_malloc_sync 00:07:14.483 EAL: No shared files mode enabled, IPC is disabled 00:07:14.483 EAL: Heap on socket 0 was expanded by 2MB 00:07:14.483 EAL: No shared files mode enabled, IPC is disabled 00:07:14.483 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:14.483 EAL: Mem event callback 'spdk:(nil)' registered 00:07:14.483 00:07:14.483 00:07:14.483 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.483 http://cunit.sourceforge.net/ 00:07:14.483 00:07:14.483 00:07:14.483 Suite: components_suite 00:07:14.483 Test: vtophys_malloc_test ...passed 00:07:14.483 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:14.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 4MB 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was shrunk by 4MB 00:07:14.742 EAL: Trying to obtain current memory policy. 
00:07:14.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 6MB 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was shrunk by 6MB 00:07:14.742 EAL: Trying to obtain current memory policy. 00:07:14.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 10MB 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was shrunk by 10MB 00:07:14.742 EAL: Trying to obtain current memory policy. 00:07:14.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 18MB 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was shrunk by 18MB 00:07:14.742 EAL: Trying to obtain current memory policy. 00:07:14.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 34MB 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was shrunk by 34MB 00:07:14.742 EAL: Trying to obtain current memory policy. 00:07:14.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 66MB 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was shrunk by 66MB 00:07:14.742 EAL: Trying to obtain current memory policy. 
00:07:14.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 130MB 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was shrunk by 130MB 00:07:14.742 EAL: Trying to obtain current memory policy. 00:07:14.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.742 EAL: Restoring previous memory policy: 4 00:07:14.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.742 EAL: request: mp_malloc_sync 00:07:14.742 EAL: No shared files mode enabled, IPC is disabled 00:07:14.742 EAL: Heap on socket 0 was expanded by 258MB 00:07:15.000 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.000 EAL: request: mp_malloc_sync 00:07:15.000 EAL: No shared files mode enabled, IPC is disabled 00:07:15.000 EAL: Heap on socket 0 was shrunk by 258MB 00:07:15.000 EAL: Trying to obtain current memory policy. 00:07:15.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.000 EAL: Restoring previous memory policy: 4 00:07:15.000 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.000 EAL: request: mp_malloc_sync 00:07:15.000 EAL: No shared files mode enabled, IPC is disabled 00:07:15.000 EAL: Heap on socket 0 was expanded by 514MB 00:07:15.258 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.258 EAL: request: mp_malloc_sync 00:07:15.258 EAL: No shared files mode enabled, IPC is disabled 00:07:15.258 EAL: Heap on socket 0 was shrunk by 514MB 00:07:15.258 EAL: Trying to obtain current memory policy. 
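
[Note] Each "Heap on socket 0 was expanded by N MB" round in this test is EAL mapping more of the 2MB hugepages reserved earlier (the setup.sh status table showed 2048 of them on node0). Their per-node accounting can be read straight from standard sysfs paths, e.g.:

    # Per-NUMA-node 2MB hugepage accounting backing the EAL heap expansions above.
    for node in /sys/devices/system/node/node*; do
        hp=$node/hugepages/hugepages-2048kB
        echo "${node##*/}: $(cat "$hp/free_hugepages") free / $(cat "$hp/nr_hugepages") total"
    done
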
00:07:15.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.516 EAL: Restoring previous memory policy: 4 00:07:15.516 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.516 EAL: request: mp_malloc_sync 00:07:15.516 EAL: No shared files mode enabled, IPC is disabled 00:07:15.516 EAL: Heap on socket 0 was expanded by 1026MB 00:07:15.773 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.032 EAL: request: mp_malloc_sync 00:07:16.032 EAL: No shared files mode enabled, IPC is disabled 00:07:16.032 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:16.032 passed 00:07:16.032 00:07:16.032 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.032 suites 1 1 n/a 0 0 00:07:16.032 tests 2 2 2 0 0 00:07:16.032 asserts 497 497 497 0 n/a 00:07:16.032 00:07:16.032 Elapsed time = 1.455 seconds 00:07:16.032 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.032 EAL: request: mp_malloc_sync 00:07:16.032 EAL: No shared files mode enabled, IPC is disabled 00:07:16.032 EAL: Heap on socket 0 was shrunk by 2MB 00:07:16.032 EAL: No shared files mode enabled, IPC is disabled 00:07:16.032 EAL: No shared files mode enabled, IPC is disabled 00:07:16.032 EAL: No shared files mode enabled, IPC is disabled 00:07:16.032 00:07:16.032 real 0m1.618s 00:07:16.032 user 0m0.935s 00:07:16.032 sys 0m0.644s 00:07:16.032 14:01:32 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.032 14:01:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:16.032 ************************************ 00:07:16.032 END TEST env_vtophys 00:07:16.032 ************************************ 00:07:16.032 14:01:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:16.032 14:01:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.032 14:01:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.032 14:01:32 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.290 ************************************ 00:07:16.290 START TEST env_pci 00:07:16.290 ************************************ 00:07:16.290 14:01:32 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:16.290 00:07:16.290 00:07:16.290 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.290 http://cunit.sourceforge.net/ 00:07:16.290 00:07:16.290 00:07:16.290 Suite: pci 00:07:16.290 Test: pci_hook ...[2024-07-26 14:01:32.935261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2397497 has claimed it 00:07:16.290 EAL: Cannot find device (10000:00:01.0) 00:07:16.290 EAL: Failed to attach device on primary process 00:07:16.290 passed 00:07:16.290 00:07:16.290 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.290 suites 1 1 n/a 0 0 00:07:16.290 tests 1 1 1 0 0 00:07:16.290 asserts 25 25 25 0 n/a 00:07:16.290 00:07:16.290 Elapsed time = 0.032 seconds 00:07:16.290 00:07:16.290 real 0m0.053s 00:07:16.290 user 0m0.012s 00:07:16.290 sys 0m0.040s 00:07:16.290 14:01:32 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.290 14:01:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:16.290 ************************************ 00:07:16.290 END TEST env_pci 00:07:16.290 ************************************ 00:07:16.290 14:01:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:16.290 
14:01:32 env -- env/env.sh@15 -- # uname 00:07:16.290 14:01:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:16.290 14:01:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:16.290 14:01:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:16.290 14:01:33 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.290 14:01:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.290 14:01:33 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.290 ************************************ 00:07:16.290 START TEST env_dpdk_post_init 00:07:16.290 ************************************ 00:07:16.290 14:01:33 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:16.290 EAL: Detected CPU lcores: 48 00:07:16.290 EAL: Detected NUMA nodes: 2 00:07:16.290 EAL: Detected shared linkage of DPDK 00:07:16.290 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:16.290 EAL: Selected IOVA mode 'VA' 00:07:16.290 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.290 EAL: VFIO support initialized 00:07:16.290 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:16.548 EAL: Using IOMMU type 1 (Type 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:07:16.548 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:07:17.485 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:07:20.766 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:07:20.766 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:07:20.766 Starting DPDK initialization... 00:07:20.766 Starting SPDK post initialization... 00:07:20.766 SPDK NVMe probe 00:07:20.766 Attaching to 0000:82:00.0 00:07:20.766 Attached to 0000:82:00.0 00:07:20.766 Cleaning up... 
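
[Note] 0000:82:00.0, the controller attached above, is the PCI address the earlier get_nvme_bdfs call resolved during pre_cleanup; as traced there, that discovery is just gen_nvme.sh piped through jq:

    # NVMe discovery pipeline from autotest_common.sh (as traced in pre_cleanup):
    # gen_nvme.sh emits bdev_nvme_attach_controller JSON; jq pulls the PCI address.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh \
        | jq -r '.config[].params.traddr'
    # prints: 0000:82:00.0
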
00:07:20.766 00:07:20.766 real 0m4.479s 00:07:20.766 user 0m3.297s 00:07:20.766 sys 0m0.234s 00:07:20.766 14:01:37 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.766 14:01:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:20.766 ************************************ 00:07:20.766 END TEST env_dpdk_post_init 00:07:20.766 ************************************ 00:07:20.766 14:01:37 env -- env/env.sh@26 -- # uname 00:07:20.766 14:01:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:20.766 14:01:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:20.766 14:01:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.766 14:01:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.766 14:01:37 env -- common/autotest_common.sh@10 -- # set +x 00:07:20.766 ************************************ 00:07:20.766 START TEST env_mem_callbacks 00:07:20.766 ************************************ 00:07:20.766 14:01:37 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:20.766 EAL: Detected CPU lcores: 48 00:07:20.766 EAL: Detected NUMA nodes: 2 00:07:20.766 EAL: Detected shared linkage of DPDK 00:07:20.766 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:20.766 EAL: Selected IOVA mode 'VA' 00:07:20.766 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.766 EAL: VFIO support initialized 00:07:20.766 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:20.766 00:07:20.766 00:07:20.766 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.766 http://cunit.sourceforge.net/ 00:07:20.766 00:07:20.766 00:07:20.766 Suite: memory 00:07:20.766 Test: test ... 
00:07:20.766 register 0x200000200000 2097152 00:07:20.766 malloc 3145728 00:07:20.766 register 0x200000400000 4194304 00:07:20.766 buf 0x200000500000 len 3145728 PASSED 00:07:20.766 malloc 64 00:07:20.766 buf 0x2000004fff40 len 64 PASSED 00:07:20.766 malloc 4194304 00:07:20.766 register 0x200000800000 6291456 00:07:20.766 buf 0x200000a00000 len 4194304 PASSED 00:07:20.766 free 0x200000500000 3145728 00:07:20.766 free 0x2000004fff40 64 00:07:20.766 unregister 0x200000400000 4194304 PASSED 00:07:20.766 free 0x200000a00000 4194304 00:07:20.766 unregister 0x200000800000 6291456 PASSED 00:07:20.766 malloc 8388608 00:07:20.766 register 0x200000400000 10485760 00:07:20.766 buf 0x200000600000 len 8388608 PASSED 00:07:20.766 free 0x200000600000 8388608 00:07:20.766 unregister 0x200000400000 10485760 PASSED 00:07:20.766 passed 00:07:20.766 00:07:20.766 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.766 suites 1 1 n/a 0 0 00:07:20.766 tests 1 1 1 0 0 00:07:20.766 asserts 15 15 15 0 n/a 00:07:20.766 00:07:20.766 Elapsed time = 0.005 seconds 00:07:20.766 00:07:20.766 real 0m0.058s 00:07:20.766 user 0m0.016s 00:07:20.766 sys 0m0.042s 00:07:20.766 14:01:37 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.766 14:01:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:20.767 ************************************ 00:07:20.767 END TEST env_mem_callbacks 00:07:20.767 ************************************ 00:07:21.025 00:07:21.025 real 0m6.951s 00:07:21.025 user 0m4.776s 00:07:21.025 sys 0m1.210s 00:07:21.025 14:01:37 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.025 14:01:37 env -- common/autotest_common.sh@10 -- # set +x 00:07:21.025 ************************************ 00:07:21.025 END TEST env 00:07:21.025 ************************************ 00:07:21.025 14:01:37 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:21.025 14:01:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.025 14:01:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.025 14:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:21.025 ************************************ 00:07:21.025 START TEST rpc 00:07:21.025 ************************************ 00:07:21.025 14:01:37 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:21.025 * Looking for test storage... 00:07:21.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:21.025 14:01:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2398154 00:07:21.025 14:01:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:21.025 14:01:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:21.025 14:01:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2398154 00:07:21.025 14:01:37 rpc -- common/autotest_common.sh@831 -- # '[' -z 2398154 ']' 00:07:21.025 14:01:37 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.025 14:01:37 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.025 14:01:37 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
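
[Note] The waitforlisten call above blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket. A minimal stand-in for what it does (illustrative only, not the real autotest_common.sh implementation; rpc_get_methods is a standard SPDK RPC):

    # Poll until the target answers JSON-RPC on its UNIX socket, or give up.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target process died
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
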
00:07:21.025 14:01:37 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.025 14:01:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.025 [2024-07-26 14:01:37.886678] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:07:21.025 [2024-07-26 14:01:37.886850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398154 ] 00:07:21.284 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.284 [2024-07-26 14:01:37.978463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.284 [2024-07-26 14:01:38.103095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:21.284 [2024-07-26 14:01:38.103163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2398154' to capture a snapshot of events at runtime. 00:07:21.284 [2024-07-26 14:01:38.103180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.284 [2024-07-26 14:01:38.103194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.284 [2024-07-26 14:01:38.103206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2398154 for offline analysis/debug. 00:07:21.284 [2024-07-26 14:01:38.103241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.543 14:01:38 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.543 14:01:38 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:21.543 14:01:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:21.543 14:01:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:21.543 14:01:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:21.543 14:01:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:21.543 14:01:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.543 14:01:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.543 14:01:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.543 ************************************ 00:07:21.543 START TEST rpc_integrity 00:07:21.543 ************************************ 00:07:21.543 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:21.543 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:21.543 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.543 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.543 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.543 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:21.543 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:21.801 14:01:38 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:21.801 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:21.801 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.801 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.801 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.801 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:21.801 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:21.801 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.801 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.801 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.801 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:21.801 { 00:07:21.801 "name": "Malloc0", 00:07:21.801 "aliases": [ 00:07:21.801 "cc8c03a5-3d02-4988-8795-7ac2d5cefbef" 00:07:21.801 ], 00:07:21.801 "product_name": "Malloc disk", 00:07:21.801 "block_size": 512, 00:07:21.801 "num_blocks": 16384, 00:07:21.801 "uuid": "cc8c03a5-3d02-4988-8795-7ac2d5cefbef", 00:07:21.801 "assigned_rate_limits": { 00:07:21.801 "rw_ios_per_sec": 0, 00:07:21.801 "rw_mbytes_per_sec": 0, 00:07:21.801 "r_mbytes_per_sec": 0, 00:07:21.801 "w_mbytes_per_sec": 0 00:07:21.801 }, 00:07:21.801 "claimed": false, 00:07:21.801 "zoned": false, 00:07:21.801 "supported_io_types": { 00:07:21.801 "read": true, 00:07:21.801 "write": true, 00:07:21.801 "unmap": true, 00:07:21.801 "flush": true, 00:07:21.801 "reset": true, 00:07:21.801 "nvme_admin": false, 00:07:21.801 "nvme_io": false, 00:07:21.801 "nvme_io_md": false, 00:07:21.801 "write_zeroes": true, 00:07:21.801 "zcopy": true, 00:07:21.801 "get_zone_info": false, 00:07:21.801 "zone_management": false, 00:07:21.801 "zone_append": false, 00:07:21.801 "compare": false, 00:07:21.801 "compare_and_write": false, 00:07:21.801 "abort": true, 00:07:21.801 "seek_hole": false, 00:07:21.801 "seek_data": false, 00:07:21.801 "copy": true, 00:07:21.801 "nvme_iov_md": false 00:07:21.801 }, 00:07:21.801 "memory_domains": [ 00:07:21.801 { 00:07:21.801 "dma_device_id": "system", 00:07:21.801 "dma_device_type": 1 00:07:21.801 }, 00:07:21.801 { 00:07:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.801 "dma_device_type": 2 00:07:21.801 } 00:07:21.801 ], 00:07:21.801 "driver_specific": {} 00:07:21.801 } 00:07:21.801 ]' 00:07:21.801 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:21.801 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 [2024-07-26 14:01:38.519200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:21.802 [2024-07-26 14:01:38.519245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.802 [2024-07-26 14:01:38.519270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1feb3e0 00:07:21.802 [2024-07-26 14:01:38.519286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.802 [2024-07-26 14:01:38.520774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
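A minimal manual sketch of the rpc_integrity flow being logged here, assuming a running spdk_tgt and the scripts/rpc.py client shipped in the same SPDK tree (the test's rpc_cmd wrapper drives these same RPC methods, as the xtrace lines show):

    ./scripts/rpc.py bdev_malloc_create 8 512               # prints the new bdev name, e.g. Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length             # expect 2: Malloc0 (now claimed) plus Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length             # expect 0 again

The bdev_get_bdevs dump that follows is what the test inspects: the malloc bdev flips to "claimed": true with claim_type "exclusive_write" once the passthru bdev stacks on top of it.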
00:07:21.802 [2024-07-26 14:01:38.520802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:21.802 Passthru0 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:21.802 { 00:07:21.802 "name": "Malloc0", 00:07:21.802 "aliases": [ 00:07:21.802 "cc8c03a5-3d02-4988-8795-7ac2d5cefbef" 00:07:21.802 ], 00:07:21.802 "product_name": "Malloc disk", 00:07:21.802 "block_size": 512, 00:07:21.802 "num_blocks": 16384, 00:07:21.802 "uuid": "cc8c03a5-3d02-4988-8795-7ac2d5cefbef", 00:07:21.802 "assigned_rate_limits": { 00:07:21.802 "rw_ios_per_sec": 0, 00:07:21.802 "rw_mbytes_per_sec": 0, 00:07:21.802 "r_mbytes_per_sec": 0, 00:07:21.802 "w_mbytes_per_sec": 0 00:07:21.802 }, 00:07:21.802 "claimed": true, 00:07:21.802 "claim_type": "exclusive_write", 00:07:21.802 "zoned": false, 00:07:21.802 "supported_io_types": { 00:07:21.802 "read": true, 00:07:21.802 "write": true, 00:07:21.802 "unmap": true, 00:07:21.802 "flush": true, 00:07:21.802 "reset": true, 00:07:21.802 "nvme_admin": false, 00:07:21.802 "nvme_io": false, 00:07:21.802 "nvme_io_md": false, 00:07:21.802 "write_zeroes": true, 00:07:21.802 "zcopy": true, 00:07:21.802 "get_zone_info": false, 00:07:21.802 "zone_management": false, 00:07:21.802 "zone_append": false, 00:07:21.802 "compare": false, 00:07:21.802 "compare_and_write": false, 00:07:21.802 "abort": true, 00:07:21.802 "seek_hole": false, 00:07:21.802 "seek_data": false, 00:07:21.802 "copy": true, 00:07:21.802 "nvme_iov_md": false 00:07:21.802 }, 00:07:21.802 "memory_domains": [ 00:07:21.802 { 00:07:21.802 "dma_device_id": "system", 00:07:21.802 "dma_device_type": 1 00:07:21.802 }, 00:07:21.802 { 00:07:21.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.802 "dma_device_type": 2 00:07:21.802 } 00:07:21.802 ], 00:07:21.802 "driver_specific": {} 00:07:21.802 }, 00:07:21.802 { 00:07:21.802 "name": "Passthru0", 00:07:21.802 "aliases": [ 00:07:21.802 "1516fd8a-c86d-509f-8e92-705c0c48504b" 00:07:21.802 ], 00:07:21.802 "product_name": "passthru", 00:07:21.802 "block_size": 512, 00:07:21.802 "num_blocks": 16384, 00:07:21.802 "uuid": "1516fd8a-c86d-509f-8e92-705c0c48504b", 00:07:21.802 "assigned_rate_limits": { 00:07:21.802 "rw_ios_per_sec": 0, 00:07:21.802 "rw_mbytes_per_sec": 0, 00:07:21.802 "r_mbytes_per_sec": 0, 00:07:21.802 "w_mbytes_per_sec": 0 00:07:21.802 }, 00:07:21.802 "claimed": false, 00:07:21.802 "zoned": false, 00:07:21.802 "supported_io_types": { 00:07:21.802 "read": true, 00:07:21.802 "write": true, 00:07:21.802 "unmap": true, 00:07:21.802 "flush": true, 00:07:21.802 "reset": true, 00:07:21.802 "nvme_admin": false, 00:07:21.802 "nvme_io": false, 00:07:21.802 "nvme_io_md": false, 00:07:21.802 "write_zeroes": true, 00:07:21.802 "zcopy": true, 00:07:21.802 "get_zone_info": false, 00:07:21.802 "zone_management": false, 00:07:21.802 "zone_append": false, 00:07:21.802 "compare": false, 00:07:21.802 "compare_and_write": false, 00:07:21.802 "abort": true, 00:07:21.802 "seek_hole": false, 00:07:21.802 "seek_data": false, 00:07:21.802 "copy": true, 00:07:21.802 "nvme_iov_md": false 00:07:21.802 
}, 00:07:21.802 "memory_domains": [ 00:07:21.802 { 00:07:21.802 "dma_device_id": "system", 00:07:21.802 "dma_device_type": 1 00:07:21.802 }, 00:07:21.802 { 00:07:21.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.802 "dma_device_type": 2 00:07:21.802 } 00:07:21.802 ], 00:07:21.802 "driver_specific": { 00:07:21.802 "passthru": { 00:07:21.802 "name": "Passthru0", 00:07:21.802 "base_bdev_name": "Malloc0" 00:07:21.802 } 00:07:21.802 } 00:07:21.802 } 00:07:21.802 ]' 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:21.802 14:01:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:21.802 00:07:21.802 real 0m0.241s 00:07:21.802 user 0m0.159s 00:07:21.802 sys 0m0.024s 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.802 14:01:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 ************************************ 00:07:21.802 END TEST rpc_integrity 00:07:21.802 ************************************ 00:07:21.802 14:01:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:21.802 14:01:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.802 14:01:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.802 14:01:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.061 ************************************ 00:07:22.061 START TEST rpc_plugins 00:07:22.061 ************************************ 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:22.061 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.061 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:22.061 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.061 14:01:38 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.061 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:22.061 { 00:07:22.061 "name": "Malloc1", 00:07:22.061 "aliases": [ 00:07:22.061 "26303936-63f3-4516-a088-553c69c562dd" 00:07:22.061 ], 00:07:22.061 "product_name": "Malloc disk", 00:07:22.061 "block_size": 4096, 00:07:22.061 "num_blocks": 256, 00:07:22.061 "uuid": "26303936-63f3-4516-a088-553c69c562dd", 00:07:22.061 "assigned_rate_limits": { 00:07:22.061 "rw_ios_per_sec": 0, 00:07:22.061 "rw_mbytes_per_sec": 0, 00:07:22.061 "r_mbytes_per_sec": 0, 00:07:22.061 "w_mbytes_per_sec": 0 00:07:22.061 }, 00:07:22.061 "claimed": false, 00:07:22.061 "zoned": false, 00:07:22.061 "supported_io_types": { 00:07:22.061 "read": true, 00:07:22.061 "write": true, 00:07:22.061 "unmap": true, 00:07:22.061 "flush": true, 00:07:22.061 "reset": true, 00:07:22.061 "nvme_admin": false, 00:07:22.061 "nvme_io": false, 00:07:22.061 "nvme_io_md": false, 00:07:22.061 "write_zeroes": true, 00:07:22.061 "zcopy": true, 00:07:22.061 "get_zone_info": false, 00:07:22.061 "zone_management": false, 00:07:22.061 "zone_append": false, 00:07:22.061 "compare": false, 00:07:22.061 "compare_and_write": false, 00:07:22.061 "abort": true, 00:07:22.061 "seek_hole": false, 00:07:22.061 "seek_data": false, 00:07:22.061 "copy": true, 00:07:22.061 "nvme_iov_md": false 00:07:22.061 }, 00:07:22.061 "memory_domains": [ 00:07:22.061 { 00:07:22.061 "dma_device_id": "system", 00:07:22.061 "dma_device_type": 1 00:07:22.061 }, 00:07:22.061 { 00:07:22.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.061 "dma_device_type": 2 00:07:22.061 } 00:07:22.061 ], 00:07:22.061 "driver_specific": {} 00:07:22.061 } 00:07:22.061 ]' 00:07:22.061 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:22.061 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:22.061 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:22.061 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.062 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:22.062 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.062 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:22.062 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.062 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:22.062 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:22.062 14:01:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:22.062 00:07:22.062 real 0m0.120s 00:07:22.062 user 0m0.079s 00:07:22.062 sys 0m0.011s 00:07:22.062 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.062 14:01:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:22.062 ************************************ 00:07:22.062 END TEST rpc_plugins 00:07:22.062 ************************************ 00:07:22.062 14:01:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:22.062 14:01:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.062 14:01:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.062 14:01:38 
rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.062 ************************************ 00:07:22.062 START TEST rpc_trace_cmd_test 00:07:22.062 ************************************ 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:22.062 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2398154", 00:07:22.062 "tpoint_group_mask": "0x8", 00:07:22.062 "iscsi_conn": { 00:07:22.062 "mask": "0x2", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "scsi": { 00:07:22.062 "mask": "0x4", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "bdev": { 00:07:22.062 "mask": "0x8", 00:07:22.062 "tpoint_mask": "0xffffffffffffffff" 00:07:22.062 }, 00:07:22.062 "nvmf_rdma": { 00:07:22.062 "mask": "0x10", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "nvmf_tcp": { 00:07:22.062 "mask": "0x20", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "ftl": { 00:07:22.062 "mask": "0x40", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "blobfs": { 00:07:22.062 "mask": "0x80", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "dsa": { 00:07:22.062 "mask": "0x200", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "thread": { 00:07:22.062 "mask": "0x400", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "nvme_pcie": { 00:07:22.062 "mask": "0x800", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "iaa": { 00:07:22.062 "mask": "0x1000", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "nvme_tcp": { 00:07:22.062 "mask": "0x2000", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "bdev_nvme": { 00:07:22.062 "mask": "0x4000", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 }, 00:07:22.062 "sock": { 00:07:22.062 "mask": "0x8000", 00:07:22.062 "tpoint_mask": "0x0" 00:07:22.062 } 00:07:22.062 }' 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:22.062 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:22.320 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:22.320 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:22.320 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:22.320 14:01:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:22.320 14:01:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:22.320 14:01:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:22.320 14:01:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:22.320 00:07:22.320 real 0m0.282s 00:07:22.320 user 0m0.251s 00:07:22.320 sys 0m0.022s 00:07:22.320 14:01:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.320 14:01:39 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.320 ************************************ 00:07:22.320 END TEST rpc_trace_cmd_test 00:07:22.320 ************************************ 00:07:22.320 14:01:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:22.320 14:01:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:22.320 14:01:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:22.320 14:01:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.320 14:01:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.320 14:01:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.320 ************************************ 00:07:22.320 START TEST rpc_daemon_integrity 00:07:22.320 ************************************ 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.578 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:22.578 { 00:07:22.579 "name": "Malloc2", 00:07:22.579 "aliases": [ 00:07:22.579 "4d57d2fb-a40f-4507-8116-4ecd1f650839" 00:07:22.579 ], 00:07:22.579 "product_name": "Malloc disk", 00:07:22.579 "block_size": 512, 00:07:22.579 "num_blocks": 16384, 00:07:22.579 "uuid": "4d57d2fb-a40f-4507-8116-4ecd1f650839", 00:07:22.579 "assigned_rate_limits": { 00:07:22.579 "rw_ios_per_sec": 0, 00:07:22.579 "rw_mbytes_per_sec": 0, 00:07:22.579 "r_mbytes_per_sec": 0, 00:07:22.579 "w_mbytes_per_sec": 0 00:07:22.579 }, 00:07:22.579 "claimed": false, 00:07:22.579 "zoned": false, 00:07:22.579 "supported_io_types": { 00:07:22.579 "read": true, 00:07:22.579 "write": true, 00:07:22.579 "unmap": true, 00:07:22.579 "flush": true, 00:07:22.579 "reset": true, 00:07:22.579 "nvme_admin": false, 00:07:22.579 "nvme_io": false, 00:07:22.579 "nvme_io_md": false, 00:07:22.579 "write_zeroes": true, 00:07:22.579 "zcopy": true, 00:07:22.579 "get_zone_info": false, 00:07:22.579 "zone_management": false, 00:07:22.579 "zone_append": false, 00:07:22.579 "compare": false, 00:07:22.579 "compare_and_write": false, 
00:07:22.579 "abort": true, 00:07:22.579 "seek_hole": false, 00:07:22.579 "seek_data": false, 00:07:22.579 "copy": true, 00:07:22.579 "nvme_iov_md": false 00:07:22.579 }, 00:07:22.579 "memory_domains": [ 00:07:22.579 { 00:07:22.579 "dma_device_id": "system", 00:07:22.579 "dma_device_type": 1 00:07:22.579 }, 00:07:22.579 { 00:07:22.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.579 "dma_device_type": 2 00:07:22.579 } 00:07:22.579 ], 00:07:22.579 "driver_specific": {} 00:07:22.579 } 00:07:22.579 ]' 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.579 [2024-07-26 14:01:39.321748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:22.579 [2024-07-26 14:01:39.321792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.579 [2024-07-26 14:01:39.321831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1feb610 00:07:22.579 [2024-07-26 14:01:39.321848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.579 [2024-07-26 14:01:39.323200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.579 [2024-07-26 14:01:39.323229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:22.579 Passthru0 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:22.579 { 00:07:22.579 "name": "Malloc2", 00:07:22.579 "aliases": [ 00:07:22.579 "4d57d2fb-a40f-4507-8116-4ecd1f650839" 00:07:22.579 ], 00:07:22.579 "product_name": "Malloc disk", 00:07:22.579 "block_size": 512, 00:07:22.579 "num_blocks": 16384, 00:07:22.579 "uuid": "4d57d2fb-a40f-4507-8116-4ecd1f650839", 00:07:22.579 "assigned_rate_limits": { 00:07:22.579 "rw_ios_per_sec": 0, 00:07:22.579 "rw_mbytes_per_sec": 0, 00:07:22.579 "r_mbytes_per_sec": 0, 00:07:22.579 "w_mbytes_per_sec": 0 00:07:22.579 }, 00:07:22.579 "claimed": true, 00:07:22.579 "claim_type": "exclusive_write", 00:07:22.579 "zoned": false, 00:07:22.579 "supported_io_types": { 00:07:22.579 "read": true, 00:07:22.579 "write": true, 00:07:22.579 "unmap": true, 00:07:22.579 "flush": true, 00:07:22.579 "reset": true, 00:07:22.579 "nvme_admin": false, 00:07:22.579 "nvme_io": false, 00:07:22.579 "nvme_io_md": false, 00:07:22.579 "write_zeroes": true, 00:07:22.579 "zcopy": true, 00:07:22.579 "get_zone_info": false, 00:07:22.579 "zone_management": false, 00:07:22.579 "zone_append": false, 00:07:22.579 "compare": false, 00:07:22.579 "compare_and_write": false, 00:07:22.579 "abort": true, 00:07:22.579 "seek_hole": false, 00:07:22.579 "seek_data": false, 00:07:22.579 "copy": true, 
00:07:22.579 "nvme_iov_md": false 00:07:22.579 }, 00:07:22.579 "memory_domains": [ 00:07:22.579 { 00:07:22.579 "dma_device_id": "system", 00:07:22.579 "dma_device_type": 1 00:07:22.579 }, 00:07:22.579 { 00:07:22.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.579 "dma_device_type": 2 00:07:22.579 } 00:07:22.579 ], 00:07:22.579 "driver_specific": {} 00:07:22.579 }, 00:07:22.579 { 00:07:22.579 "name": "Passthru0", 00:07:22.579 "aliases": [ 00:07:22.579 "013da4e3-ae04-56e9-90f1-df626d5af635" 00:07:22.579 ], 00:07:22.579 "product_name": "passthru", 00:07:22.579 "block_size": 512, 00:07:22.579 "num_blocks": 16384, 00:07:22.579 "uuid": "013da4e3-ae04-56e9-90f1-df626d5af635", 00:07:22.579 "assigned_rate_limits": { 00:07:22.579 "rw_ios_per_sec": 0, 00:07:22.579 "rw_mbytes_per_sec": 0, 00:07:22.579 "r_mbytes_per_sec": 0, 00:07:22.579 "w_mbytes_per_sec": 0 00:07:22.579 }, 00:07:22.579 "claimed": false, 00:07:22.579 "zoned": false, 00:07:22.579 "supported_io_types": { 00:07:22.579 "read": true, 00:07:22.579 "write": true, 00:07:22.579 "unmap": true, 00:07:22.579 "flush": true, 00:07:22.579 "reset": true, 00:07:22.579 "nvme_admin": false, 00:07:22.579 "nvme_io": false, 00:07:22.579 "nvme_io_md": false, 00:07:22.579 "write_zeroes": true, 00:07:22.579 "zcopy": true, 00:07:22.579 "get_zone_info": false, 00:07:22.579 "zone_management": false, 00:07:22.579 "zone_append": false, 00:07:22.579 "compare": false, 00:07:22.579 "compare_and_write": false, 00:07:22.579 "abort": true, 00:07:22.579 "seek_hole": false, 00:07:22.579 "seek_data": false, 00:07:22.579 "copy": true, 00:07:22.579 "nvme_iov_md": false 00:07:22.579 }, 00:07:22.579 "memory_domains": [ 00:07:22.579 { 00:07:22.579 "dma_device_id": "system", 00:07:22.579 "dma_device_type": 1 00:07:22.579 }, 00:07:22.579 { 00:07:22.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.579 "dma_device_type": 2 00:07:22.579 } 00:07:22.579 ], 00:07:22.579 "driver_specific": { 00:07:22.579 "passthru": { 00:07:22.579 "name": "Passthru0", 00:07:22.579 "base_bdev_name": "Malloc2" 00:07:22.579 } 00:07:22.579 } 00:07:22.579 } 00:07:22.579 ]' 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:22.579 14:01:39 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:22.579 00:07:22.579 real 0m0.239s 00:07:22.579 user 0m0.160s 00:07:22.579 sys 0m0.025s 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.579 14:01:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.579 ************************************ 00:07:22.579 END TEST rpc_daemon_integrity 00:07:22.579 ************************************ 00:07:22.838 14:01:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:22.838 14:01:39 rpc -- rpc/rpc.sh@84 -- # killprocess 2398154 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@950 -- # '[' -z 2398154 ']' 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@954 -- # kill -0 2398154 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@955 -- # uname 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398154 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398154' 00:07:22.838 killing process with pid 2398154 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@969 -- # kill 2398154 00:07:22.838 14:01:39 rpc -- common/autotest_common.sh@974 -- # wait 2398154 00:07:23.406 00:07:23.406 real 0m2.269s 00:07:23.406 user 0m2.911s 00:07:23.406 sys 0m0.692s 00:07:23.406 14:01:39 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.406 14:01:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.406 ************************************ 00:07:23.406 END TEST rpc 00:07:23.406 ************************************ 00:07:23.406 14:01:40 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:23.406 14:01:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.406 14:01:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.406 14:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:23.406 ************************************ 00:07:23.406 START TEST skip_rpc 00:07:23.406 ************************************ 00:07:23.406 14:01:40 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:23.406 * Looking for test storage... 
00:07:23.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:23.406 14:01:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:23.406 14:01:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:23.406 14:01:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:23.406 14:01:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.406 14:01:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.406 14:01:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.406 ************************************ 00:07:23.406 START TEST skip_rpc 00:07:23.406 ************************************ 00:07:23.406 14:01:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:23.406 14:01:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2398593 00:07:23.406 14:01:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:23.406 14:01:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.406 14:01:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:23.406 [2024-07-26 14:01:40.232694] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:07:23.406 [2024-07-26 14:01:40.232782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398593 ] 00:07:23.406 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.665 [2024-07-26 14:01:40.301805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.665 [2024-07-26 14:01:40.427621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2398593 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2398593 ']' 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2398593 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398593 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398593' 00:07:28.929 killing process with pid 2398593 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2398593 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2398593 00:07:28.929 00:07:28.929 real 0m5.529s 00:07:28.929 user 0m5.181s 00:07:28.929 sys 0m0.359s 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.929 14:01:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.929 ************************************ 00:07:28.929 END TEST skip_rpc 00:07:28.929 ************************************ 00:07:28.929 14:01:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:28.929 14:01:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.929 14:01:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.929 14:01:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.929 ************************************ 00:07:28.929 START TEST skip_rpc_with_json 00:07:28.929 ************************************ 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2399280 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2399280 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2399280 ']' 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
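The skip_rpc test that just ended asserts the negative path: started with --no-rpc-server, the target never opens /var/tmp/spdk.sock, so any RPC must fail (the es=1 branch above). A quick by-hand check under the same assumption, using only flags and RPCs visible in this log:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    ./scripts/rpc.py spdk_get_version && echo unexpected || echo 'RPC refused, as intended'
    kill %1

The skip_rpc_with_json test now starting takes the positive path instead: configure a live target, snapshot it with save_config, then restart from that JSON and grep the log for proof the config was replayed.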
00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.929 14:01:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.187 [2024-07-26 14:01:45.829105] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:07:29.187 [2024-07-26 14:01:45.829208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399280 ] 00:07:29.187 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.187 [2024-07-26 14:01:45.897333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.187 [2024-07-26 14:01:46.017833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.446 [2024-07-26 14:01:46.294528] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:29.446 request: 00:07:29.446 { 00:07:29.446 "trtype": "tcp", 00:07:29.446 "method": "nvmf_get_transports", 00:07:29.446 "req_id": 1 00:07:29.446 } 00:07:29.446 Got JSON-RPC error response 00:07:29.446 response: 00:07:29.446 { 00:07:29.446 "code": -19, 00:07:29.446 "message": "No such device" 00:07:29.446 } 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.446 [2024-07-26 14:01:46.302640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.446 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.704 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.704 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:29.704 { 00:07:29.704 "subsystems": [ 00:07:29.704 { 00:07:29.704 "subsystem": "vfio_user_target", 00:07:29.704 "config": null 00:07:29.704 }, 00:07:29.704 { 00:07:29.704 "subsystem": "keyring", 00:07:29.704 "config": [] 00:07:29.704 }, 00:07:29.704 { 00:07:29.704 "subsystem": "iobuf", 00:07:29.704 "config": [ 00:07:29.704 { 00:07:29.704 "method": "iobuf_set_options", 00:07:29.704 "params": { 00:07:29.704 "small_pool_count": 8192, 00:07:29.704 "large_pool_count": 1024, 00:07:29.704 "small_bufsize": 8192, 00:07:29.704 "large_bufsize": 
135168 00:07:29.704 } 00:07:29.704 } 00:07:29.704 ] 00:07:29.704 }, 00:07:29.704 { 00:07:29.704 "subsystem": "sock", 00:07:29.704 "config": [ 00:07:29.704 { 00:07:29.704 "method": "sock_set_default_impl", 00:07:29.704 "params": { 00:07:29.704 "impl_name": "posix" 00:07:29.704 } 00:07:29.704 }, 00:07:29.704 { 00:07:29.704 "method": "sock_impl_set_options", 00:07:29.704 "params": { 00:07:29.704 "impl_name": "ssl", 00:07:29.704 "recv_buf_size": 4096, 00:07:29.704 "send_buf_size": 4096, 00:07:29.704 "enable_recv_pipe": true, 00:07:29.704 "enable_quickack": false, 00:07:29.704 "enable_placement_id": 0, 00:07:29.704 "enable_zerocopy_send_server": true, 00:07:29.704 "enable_zerocopy_send_client": false, 00:07:29.704 "zerocopy_threshold": 0, 00:07:29.704 "tls_version": 0, 00:07:29.704 "enable_ktls": false 00:07:29.704 } 00:07:29.704 }, 00:07:29.704 { 00:07:29.704 "method": "sock_impl_set_options", 00:07:29.704 "params": { 00:07:29.704 "impl_name": "posix", 00:07:29.704 "recv_buf_size": 2097152, 00:07:29.704 "send_buf_size": 2097152, 00:07:29.704 "enable_recv_pipe": true, 00:07:29.704 "enable_quickack": false, 00:07:29.704 "enable_placement_id": 0, 00:07:29.704 "enable_zerocopy_send_server": true, 00:07:29.704 "enable_zerocopy_send_client": false, 00:07:29.705 "zerocopy_threshold": 0, 00:07:29.705 "tls_version": 0, 00:07:29.705 "enable_ktls": false 00:07:29.705 } 00:07:29.705 } 00:07:29.705 ] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "vmd", 00:07:29.705 "config": [] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "accel", 00:07:29.705 "config": [ 00:07:29.705 { 00:07:29.705 "method": "accel_set_options", 00:07:29.705 "params": { 00:07:29.705 "small_cache_size": 128, 00:07:29.705 "large_cache_size": 16, 00:07:29.705 "task_count": 2048, 00:07:29.705 "sequence_count": 2048, 00:07:29.705 "buf_count": 2048 00:07:29.705 } 00:07:29.705 } 00:07:29.705 ] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "bdev", 00:07:29.705 "config": [ 00:07:29.705 { 00:07:29.705 "method": "bdev_set_options", 00:07:29.705 "params": { 00:07:29.705 "bdev_io_pool_size": 65535, 00:07:29.705 "bdev_io_cache_size": 256, 00:07:29.705 "bdev_auto_examine": true, 00:07:29.705 "iobuf_small_cache_size": 128, 00:07:29.705 "iobuf_large_cache_size": 16 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "bdev_raid_set_options", 00:07:29.705 "params": { 00:07:29.705 "process_window_size_kb": 1024, 00:07:29.705 "process_max_bandwidth_mb_sec": 0 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "bdev_iscsi_set_options", 00:07:29.705 "params": { 00:07:29.705 "timeout_sec": 30 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "bdev_nvme_set_options", 00:07:29.705 "params": { 00:07:29.705 "action_on_timeout": "none", 00:07:29.705 "timeout_us": 0, 00:07:29.705 "timeout_admin_us": 0, 00:07:29.705 "keep_alive_timeout_ms": 10000, 00:07:29.705 "arbitration_burst": 0, 00:07:29.705 "low_priority_weight": 0, 00:07:29.705 "medium_priority_weight": 0, 00:07:29.705 "high_priority_weight": 0, 00:07:29.705 "nvme_adminq_poll_period_us": 10000, 00:07:29.705 "nvme_ioq_poll_period_us": 0, 00:07:29.705 "io_queue_requests": 0, 00:07:29.705 "delay_cmd_submit": true, 00:07:29.705 "transport_retry_count": 4, 00:07:29.705 "bdev_retry_count": 3, 00:07:29.705 "transport_ack_timeout": 0, 00:07:29.705 "ctrlr_loss_timeout_sec": 0, 00:07:29.705 "reconnect_delay_sec": 0, 00:07:29.705 "fast_io_fail_timeout_sec": 0, 00:07:29.705 "disable_auto_failback": false, 00:07:29.705 "generate_uuids": 
false, 00:07:29.705 "transport_tos": 0, 00:07:29.705 "nvme_error_stat": false, 00:07:29.705 "rdma_srq_size": 0, 00:07:29.705 "io_path_stat": false, 00:07:29.705 "allow_accel_sequence": false, 00:07:29.705 "rdma_max_cq_size": 0, 00:07:29.705 "rdma_cm_event_timeout_ms": 0, 00:07:29.705 "dhchap_digests": [ 00:07:29.705 "sha256", 00:07:29.705 "sha384", 00:07:29.705 "sha512" 00:07:29.705 ], 00:07:29.705 "dhchap_dhgroups": [ 00:07:29.705 "null", 00:07:29.705 "ffdhe2048", 00:07:29.705 "ffdhe3072", 00:07:29.705 "ffdhe4096", 00:07:29.705 "ffdhe6144", 00:07:29.705 "ffdhe8192" 00:07:29.705 ] 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "bdev_nvme_set_hotplug", 00:07:29.705 "params": { 00:07:29.705 "period_us": 100000, 00:07:29.705 "enable": false 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "bdev_wait_for_examine" 00:07:29.705 } 00:07:29.705 ] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "scsi", 00:07:29.705 "config": null 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "scheduler", 00:07:29.705 "config": [ 00:07:29.705 { 00:07:29.705 "method": "framework_set_scheduler", 00:07:29.705 "params": { 00:07:29.705 "name": "static" 00:07:29.705 } 00:07:29.705 } 00:07:29.705 ] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "vhost_scsi", 00:07:29.705 "config": [] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "vhost_blk", 00:07:29.705 "config": [] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "ublk", 00:07:29.705 "config": [] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "nbd", 00:07:29.705 "config": [] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "nvmf", 00:07:29.705 "config": [ 00:07:29.705 { 00:07:29.705 "method": "nvmf_set_config", 00:07:29.705 "params": { 00:07:29.705 "discovery_filter": "match_any", 00:07:29.705 "admin_cmd_passthru": { 00:07:29.705 "identify_ctrlr": false 00:07:29.705 } 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "nvmf_set_max_subsystems", 00:07:29.705 "params": { 00:07:29.705 "max_subsystems": 1024 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "nvmf_set_crdt", 00:07:29.705 "params": { 00:07:29.705 "crdt1": 0, 00:07:29.705 "crdt2": 0, 00:07:29.705 "crdt3": 0 00:07:29.705 } 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "method": "nvmf_create_transport", 00:07:29.705 "params": { 00:07:29.705 "trtype": "TCP", 00:07:29.705 "max_queue_depth": 128, 00:07:29.705 "max_io_qpairs_per_ctrlr": 127, 00:07:29.705 "in_capsule_data_size": 4096, 00:07:29.705 "max_io_size": 131072, 00:07:29.705 "io_unit_size": 131072, 00:07:29.705 "max_aq_depth": 128, 00:07:29.705 "num_shared_buffers": 511, 00:07:29.705 "buf_cache_size": 4294967295, 00:07:29.705 "dif_insert_or_strip": false, 00:07:29.705 "zcopy": false, 00:07:29.705 "c2h_success": true, 00:07:29.705 "sock_priority": 0, 00:07:29.705 "abort_timeout_sec": 1, 00:07:29.705 "ack_timeout": 0, 00:07:29.705 "data_wr_pool_size": 0 00:07:29.705 } 00:07:29.705 } 00:07:29.705 ] 00:07:29.705 }, 00:07:29.705 { 00:07:29.705 "subsystem": "iscsi", 00:07:29.705 "config": [ 00:07:29.705 { 00:07:29.705 "method": "iscsi_set_options", 00:07:29.705 "params": { 00:07:29.705 "node_base": "iqn.2016-06.io.spdk", 00:07:29.705 "max_sessions": 128, 00:07:29.705 "max_connections_per_session": 2, 00:07:29.705 "max_queue_depth": 64, 00:07:29.705 "default_time2wait": 2, 00:07:29.705 "default_time2retain": 20, 00:07:29.705 "first_burst_length": 8192, 00:07:29.705 "immediate_data": true, 00:07:29.705 "allow_duplicated_isid": 
false, 00:07:29.705 "error_recovery_level": 0, 00:07:29.705 "nop_timeout": 60, 00:07:29.705 "nop_in_interval": 30, 00:07:29.705 "disable_chap": false, 00:07:29.705 "require_chap": false, 00:07:29.705 "mutual_chap": false, 00:07:29.705 "chap_group": 0, 00:07:29.705 "max_large_datain_per_connection": 64, 00:07:29.705 "max_r2t_per_connection": 4, 00:07:29.705 "pdu_pool_size": 36864, 00:07:29.705 "immediate_data_pool_size": 16384, 00:07:29.705 "data_out_pool_size": 2048 00:07:29.705 } 00:07:29.705 } 00:07:29.705 ] 00:07:29.705 } 00:07:29.705 ] 00:07:29.705 } 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2399280 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2399280 ']' 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2399280 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2399280 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2399280' 00:07:29.705 killing process with pid 2399280 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2399280 00:07:29.705 14:01:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2399280 00:07:30.272 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2399426 00:07:30.272 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:30.272 14:01:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:35.581 14:01:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2399426 00:07:35.581 14:01:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2399426 ']' 00:07:35.581 14:01:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2399426 00:07:35.581 14:01:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:35.581 14:01:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.581 14:01:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2399426 00:07:35.581 14:01:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.581 14:01:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.581 14:01:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2399426' 00:07:35.581 killing process with pid 2399426 00:07:35.581 14:01:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2399426 00:07:35.581 14:01:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
2399426 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:35.841 00:07:35.841 real 0m6.736s 00:07:35.841 user 0m6.314s 00:07:35.841 sys 0m0.763s 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:35.841 ************************************ 00:07:35.841 END TEST skip_rpc_with_json 00:07:35.841 ************************************ 00:07:35.841 14:01:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:35.841 14:01:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.841 14:01:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.841 14:01:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.841 ************************************ 00:07:35.841 START TEST skip_rpc_with_delay 00:07:35.841 ************************************ 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:35.841 [2024-07-26 14:01:52.677753] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:35.841 [2024-07-26 14:01:52.677994] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:35.841
00:07:35.841 real 0m0.152s
00:07:35.841 user 0m0.103s
00:07:35.841 sys 0m0.047s
00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.841 14:01:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:07:35.841 ************************************
00:07:35.841 END TEST skip_rpc_with_delay
00:07:35.841 ************************************
00:07:36.100 14:01:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:07:36.100 14:01:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:07:36.100 14:01:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:07:36.100 14:01:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:36.100 14:01:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.100 14:01:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:36.100 ************************************
00:07:36.100 START TEST exit_on_failed_rpc_init
00:07:36.100 ************************************
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2400140
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2400140
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2400140 ']'
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:36.100 14:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:36.100 [2024-07-26 14:01:52.849208] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:07:36.101 [2024-07-26 14:01:52.849309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400140 ]
00:07:36.101 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.101 [2024-07-26 14:01:52.917712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.359 [2024-07-26 14:01:53.039491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:07:36.617 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:36.617 [2024-07-26 14:01:53.413710] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:07:36.617 [2024-07-26 14:01:53.413806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400152 ]
00:07:36.617 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.617 [2024-07-26 14:01:53.500360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.874 [2024-07-26 14:01:53.624150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:36.874 [2024-07-26 14:01:53.624275] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:07:36.874 [2024-07-26 14:01:53.624297] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:07:36.874 [2024-07-26 14:01:53.624310] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2400140
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2400140 ']'
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2400140
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2400140
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2400140'
00:07:37.131 killing process with pid 2400140
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2400140
00:07:37.131 14:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2400140
00:07:37.696
00:07:37.696 real 0m1.500s
00:07:37.696 user 0m1.729s
00:07:37.696 sys 0m0.528s
00:07:37.696 14:01:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:37.696 14:01:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:37.696 ************************************
00:07:37.696 END TEST exit_on_failed_rpc_init
00:07:37.696 ************************************
00:07:37.696 14:01:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
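That failure is the point of the test: the second spdk_tgt was started without its own RPC socket, collided with the first instance on the default /var/tmp/spdk.sock ("in use. Specify another."), and exited non-zero, which NOT turns into a pass. Outside a negative test, two targets can share a host by giving each a distinct core mask and RPC socket via -r; the socket names below are illustrative, not from this run:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &   # first target, core 0
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &   # second target, core 1, its own socket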
00:07:37.696
00:07:37.696 real 0m14.261s
00:07:37.696 user 0m13.461s
00:07:37.696 sys 0m1.924s
00:07:37.696 14:01:54 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:37.696 14:01:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:37.696 ************************************
00:07:37.696 END TEST skip_rpc
00:07:37.696 ************************************
00:07:37.696 14:01:54 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:37.696 14:01:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:37.696 14:01:54 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:37.696 14:01:54 -- common/autotest_common.sh@10 -- # set +x
00:07:37.696 ************************************
00:07:37.696 START TEST rpc_client
00:07:37.696 ************************************
00:07:37.696 14:01:54 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:37.696 * Looking for test storage...
00:07:37.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:07:37.696 14:01:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:07:37.696 OK
00:07:37.696 14:01:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:37.696
00:07:37.696 real 0m0.091s
00:07:37.696 user 0m0.038s
00:07:37.696 sys 0m0.059s
00:07:37.696 14:01:54 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:37.696 14:01:54 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:07:37.696 ************************************
00:07:37.696 END TEST rpc_client
00:07:37.696 ************************************
00:07:37.696 14:01:54 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:07:37.696 14:01:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:37.696 14:01:54 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:37.696 14:01:54 -- common/autotest_common.sh@10 -- # set +x
00:07:37.696 ************************************
00:07:37.696 START TEST json_config
00:07:37.696 ************************************
00:07:37.954 14:01:54 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:07:37.954 14:01:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@7 -- # uname -s
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:37.954 14:01:54 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:37.954 14:01:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:37.954 14:01:54 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:37.954 14:01:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:37.954 14:01:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.955 14:01:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.955 14:01:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.955 14:01:54 json_config -- paths/export.sh@5 -- # export PATH
00:07:37.955 14:01:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@47 -- # : 0
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:37.955 14:01:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init'
00:07:37.955 INFO: JSON configuration test init
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@361 -- # json_config_test_init
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:37.955 14:01:54 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc
00:07:37.955 14:01:54 json_config -- json_config/common.sh@9 -- # local app=target
00:07:37.955 14:01:54 json_config -- json_config/common.sh@10 -- # shift
00:07:37.955 14:01:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:37.955 14:01:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:37.955 14:01:54 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:37.955 14:01:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:37.955 14:01:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:37.955 14:01:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2400448
00:07:37.955 14:01:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:37.955 14:01:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:07:37.955 Waiting for target to run...
00:07:37.955 14:01:54 json_config -- json_config/common.sh@25 -- # waitforlisten 2400448 /var/tmp/spdk_tgt.sock
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@831 -- # '[' -z 2400448 ']'
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:37.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:37.955 14:01:54 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:37.955 [2024-07-26 14:01:54.685613] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:07:37.955 [2024-07-26 14:01:54.685734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400448 ]
00:07:37.955 EAL: No free 2048 kB hugepages reported on node 1
00:07:38.520 [2024-07-26 14:01:55.247034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.520 [2024-07-26 14:01:55.355794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.083 14:01:55 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:39.083 14:01:55 json_config -- common/autotest_common.sh@864 -- # return 0
00:07:39.083 14:01:55 json_config -- json_config/common.sh@26 -- # echo ''
00:07:39.083
00:07:39.084 14:01:55 json_config -- json_config/json_config.sh@273 -- # create_accel_config
00:07:39.084 14:01:55 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config
00:07:39.084 14:01:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:39.084 14:01:55 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:39.084 14:01:55 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]]
00:07:39.084 14:01:55 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config
00:07:39.084 14:01:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:39.084 14:01:55 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:39.084 14:01:55 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:07:39.084 14:01:55 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config
00:07:39.084 14:01:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:07:43.264 14:01:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:43.264 14:01:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:07:43.264 14:01:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@48 -- # local get_types
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@50 -- # local type_diff
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n'
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@51 -- # uniq -u
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@51 -- # sort
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@51 -- # type_diff=
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]]
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types
00:07:43.264 14:01:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:43.264 14:01:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@59 -- # return 0
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]]
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]]
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config
00:07:43.264 14:01:59 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config
00:07:43.264 14:01:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:43.264 14:01:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:43.265 14:01:59 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:07:43.265 14:01:59 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]]
00:07:43.265 14:01:59 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]]
00:07:43.265 14:01:59 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:07:43.265 14:01:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:07:43.522 MallocForNvmf0
00:07:43.522 14:02:00 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:07:43.522 14:02:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:07:44.088 MallocForNvmf1
00:07:44.088 14:02:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:07:44.088 14:02:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:07:44.653 [2024-07-26 14:02:01.469378] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:44.653 14:02:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:44.653 14:02:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:45.218 14:02:02 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:07:45.218 14:02:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:07:45.783 14:02:02 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:07:45.783 14:02:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:07:46.039 14:02:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:07:46.039 14:02:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:07:46.604 [2024-07-26 14:02:03.379420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:07:46.604 14:02:03 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config
00:07:46.604 14:02:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:46.604 14:02:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:46.604 14:02:03 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target
00:07:46.604 14:02:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:46.604 14:02:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:46.604 14:02:03 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]]
00:07:46.604 14:02:03 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:46.604 14:02:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:47.169 MallocBdevForConfigChangeCheck
00:07:47.169 14:02:03 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init
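Condensed from the trace above, the json_config test builds its NVMe-oF/TCP target with the following rpc.py sequence against the target's RPC socket; these are the exact calls already shown, collected here only for readability:

    RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB malloc bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB malloc bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420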
00:07:47.169 14:02:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:47.169 14:02:03 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:47.169 14:02:04 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config
00:07:47.169 14:02:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:47.735 14:02:04 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...'
00:07:47.735 INFO: shutting down applications...
00:07:47.735 14:02:04 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]]
00:07:47.735 14:02:04 json_config -- json_config/json_config.sh@372 -- # json_config_clear target
00:07:47.735 14:02:04 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]]
00:07:47.735 14:02:04 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:49.634 Calling clear_iscsi_subsystem
00:07:49.634 Calling clear_nvmf_subsystem
00:07:49.634 Calling clear_nbd_subsystem
00:07:49.634 Calling clear_ublk_subsystem
00:07:49.634 Calling clear_vhost_blk_subsystem
00:07:49.634 Calling clear_vhost_scsi_subsystem
00:07:49.634 Calling clear_bdev_subsystem
00:07:49.634 14:02:06 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:07:49.634 14:02:06 json_config -- json_config/json_config.sh@347 -- # count=100
00:07:49.634 14:02:06 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']'
00:07:49.634 14:02:06 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:49.634 14:02:06 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:07:49.634 14:02:06 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:49.924 14:02:06 json_config -- json_config/json_config.sh@349 -- # break
00:07:49.924 14:02:06 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']'
00:07:49.924 14:02:06 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target
00:07:49.924 14:02:06 json_config -- json_config/common.sh@31 -- # local app=target
00:07:49.924 14:02:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:49.924 14:02:06 json_config -- json_config/common.sh@35 -- # [[ -n 2400448 ]]
00:07:49.924 14:02:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2400448
00:07:49.924 14:02:06 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:49.924 14:02:06 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:49.924 14:02:06 json_config -- json_config/common.sh@41 -- # kill -0 2400448
00:07:49.924 14:02:06 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:50.493 14:02:07 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:50.493 14:02:07 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:50.493 14:02:07 json_config -- json_config/common.sh@41 -- # kill -0 2400448
00:07:50.493 14:02:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:50.493 14:02:07 json_config -- json_config/common.sh@43 -- # break
00:07:50.493 14:02:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:50.493 14:02:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:50.494 SPDK target shutdown done
00:07:50.494 14:02:07 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...'
00:07:50.494 INFO: relaunching applications...
00:07:50.494 14:02:07 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:50.494 14:02:07 json_config -- json_config/common.sh@9 -- # local app=target
00:07:50.494 14:02:07 json_config -- json_config/common.sh@10 -- # shift
00:07:50.494 14:02:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:50.494 14:02:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:50.494 14:02:07 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:50.494 14:02:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:50.494 14:02:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:50.494 14:02:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2402017
00:07:50.494 14:02:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:50.494 14:02:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:50.494 Waiting for target to run...
00:07:50.494 14:02:07 json_config -- json_config/common.sh@25 -- # waitforlisten 2402017 /var/tmp/spdk_tgt.sock
00:07:50.494 14:02:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 2402017 ']'
00:07:50.494 14:02:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:50.494 14:02:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:50.494 14:02:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:50.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:50.494 14:02:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:50.494 14:02:07 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:50.494 [2024-07-26 14:02:07.298477] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:07:50.494 [2024-07-26 14:02:07.298586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2402017 ]
00:07:50.494 EAL: No free 2048 kB hugepages reported on node 1
00:07:51.062 [2024-07-26 14:02:07.918466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.320 [2024-07-26 14:02:08.027331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.604 [2024-07-26 14:02:11.075643] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:54.604 [2024-07-26 14:02:11.108175] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:07:54.604 14:02:11 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:54.604 14:02:11 json_config -- common/autotest_common.sh@864 -- # return 0
00:07:54.604 14:02:11 json_config -- json_config/common.sh@26 -- # echo ''
00:07:54.604
00:07:54.604 14:02:11 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]]
00:07:54.604 14:02:11 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:54.604 INFO: Checking if target configuration is the same...
00:07:54.604 14:02:11 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:54.604 14:02:11 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config
00:07:54.604 14:02:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:54.604 + '[' 2 -ne 2 ']'
00:07:54.604 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:54.604 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:07:54.604 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:54.604 +++ basename /dev/fd/62
00:07:54.604 ++ mktemp /tmp/62.XXX
00:07:54.604 + tmp_file_1=/tmp/62.4vK
00:07:54.604 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:54.604 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:54.604 + tmp_file_2=/tmp/spdk_tgt_config.json.NSm
00:07:54.604 + ret=0
00:07:54.604 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:54.862 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:54.862 + diff -u /tmp/62.4vK /tmp/spdk_tgt_config.json.NSm
00:07:54.862 + echo 'INFO: JSON config files are the same'
00:07:54.862 INFO: JSON config files are the same
00:07:54.862 + rm /tmp/62.4vK /tmp/spdk_tgt_config.json.NSm
00:07:54.862 + exit 0
00:07:54.862 14:02:11 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]]
00:07:54.862 14:02:11 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:54.862 INFO: changing configuration and checking if this can be detected...
00:07:54.862 14:02:11 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:54.862 14:02:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:55.428 14:02:12 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:55.428 14:02:12 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config
00:07:55.428 14:02:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:55.428 + '[' 2 -ne 2 ']'
00:07:55.428 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:55.428 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:07:55.428 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:55.428 +++ basename /dev/fd/62
00:07:55.428 ++ mktemp /tmp/62.XXX
00:07:55.428 + tmp_file_1=/tmp/62.28m
00:07:55.428 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:55.428 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:55.428 + tmp_file_2=/tmp/spdk_tgt_config.json.eCs
00:07:55.428 + ret=0
00:07:55.428 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:56.362 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:56.362 + diff -u /tmp/62.28m /tmp/spdk_tgt_config.json.eCs
00:07:56.362 + ret=1
00:07:56.362 + echo '=== Start of file: /tmp/62.28m ==='
00:07:56.362 + cat /tmp/62.28m
00:07:56.362 + echo '=== End of file: /tmp/62.28m ==='
00:07:56.362 + echo ''
00:07:56.362 + echo '=== Start of file: /tmp/spdk_tgt_config.json.eCs ==='
00:07:56.362 + cat /tmp/spdk_tgt_config.json.eCs
00:07:56.362 + echo '=== End of file: /tmp/spdk_tgt_config.json.eCs ==='
00:07:56.362 + echo ''
00:07:56.362 + rm /tmp/62.28m /tmp/spdk_tgt_config.json.eCs
00:07:56.362 + exit 1
00:07:56.362 14:02:12 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.'
00:07:56.362 INFO: configuration change detected.
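The comparison idiom json_diff.sh applies above: both the live configuration (save_config) and the on-disk file are normalized with the same sort filter, so a plain diff decides equality, and deleting MallocBdevForConfigChangeCheck is enough to flip the result from exit 0 to exit 1. Reduced to its core, with temp-file handling simplified for illustration:

    live=$(mktemp) file=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$file"
    diff -u "$live" "$file" && echo 'INFO: JSON config files are the same'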
00:07:56.362 14:02:12 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini
00:07:56.362 14:02:12 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini
00:07:56.362 14:02:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:56.362 14:02:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:56.362 14:02:12 json_config -- json_config/json_config.sh@311 -- # local ret=0
00:07:56.362 14:02:12 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]]
00:07:56.362 14:02:12 json_config -- json_config/json_config.sh@321 -- # [[ -n 2402017 ]]
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config
00:07:56.363 14:02:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:56.363 14:02:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]]
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@197 -- # uname -s
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]]
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]]
00:07:56.363 14:02:12 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config
00:07:56.363 14:02:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:56.363 14:02:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:56.363 14:02:13 json_config -- json_config/json_config.sh@327 -- # killprocess 2402017
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@950 -- # '[' -z 2402017 ']'
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@954 -- # kill -0 2402017
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@955 -- # uname
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2402017
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2402017'
00:07:56.363 killing process with pid 2402017
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@969 -- # kill 2402017
00:07:56.363 14:02:13 json_config -- common/autotest_common.sh@974 -- # wait 2402017
00:07:58.263 14:02:14 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:07:58.263 14:02:14 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini
00:07:58.263 14:02:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:58.263 14:02:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:58.263 14:02:14 json_config -- json_config/json_config.sh@332 -- # return 0
00:07:58.263 14:02:14 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success'
00:07:58.263 INFO: Success
00:07:58.263
00:07:58.263 real 0m20.213s
00:07:58.263 user 0m25.737s
00:07:58.263 sys 0m3.024s
00:07:58.263 14:02:14 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:58.263 14:02:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:58.263 ************************************
00:07:58.263 END TEST json_config
00:07:58.263 ************************************
00:07:58.263 14:02:14 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:58.263 14:02:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:58.263 14:02:14 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:58.263 14:02:14 -- common/autotest_common.sh@10 -- # set +x
00:07:58.263 ************************************
00:07:58.263 START TEST json_config_extra_key
00:07:58.263 ************************************
00:07:58.263 14:02:14 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:58.263 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:58.263 14:02:14 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:58.263 14:02:14 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:58.263 14:02:14 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:58.263 14:02:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.263 14:02:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.263 14:02:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.263 14:02:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:58.263 14:02:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:58.263 14:02:14 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:58.264 14:02:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:58.264 14:02:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:58.264 14:02:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:58.264 14:02:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:58.264 14:02:14 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:58.264 14:02:14 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:07:58.264 INFO: launching applications...
00:07:58.264 14:02:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2403029
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:58.264 Waiting for target to run...
00:07:58.264 14:02:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2403029 /var/tmp/spdk_tgt.sock
00:07:58.264 14:02:14 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2403029 ']'
00:07:58.264 14:02:14 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:58.264 14:02:14 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:58.264 14:02:14 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:58.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:58.264 14:02:14 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:58.264 14:02:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:58.264 [2024-07-26 14:02:14.959535] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:07:58.264 [2024-07-26 14:02:14.959646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403029 ]
00:07:58.264 EAL: No free 2048 kB hugepages reported on node 1
00:07:58.522 [2024-07-26 14:02:15.358466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.780 [2024-07-26 14:02:15.452566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.715 14:02:16 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:59.715 14:02:16 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:07:59.715
00:07:59.715 14:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:07:59.715 INFO: shutting down applications...
00:07:59.715 14:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2403029 ]]
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2403029
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2403029
00:07:59.715 14:02:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:59.974 14:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:59.974 14:02:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:59.974 14:02:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2403029
00:07:59.974 14:02:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:08:00.542 14:02:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:08:00.542 14:02:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:08:00.542 14:02:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2403029
00:08:00.542 14:02:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:08:00.542 14:02:17 json_config_extra_key -- json_config/common.sh@43 -- # break
00:08:00.542 14:02:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:08:00.542 14:02:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:08:00.542 SPDK target shutdown done
00:08:00.542 14:02:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:08:00.542 Success
00:08:00.542
00:08:00.542 real 0m2.447s
00:08:00.542 user 0m2.226s
00:08:00.542 sys 0m0.556s
00:08:00.542 14:02:17 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:00.542 14:02:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:08:00.542 ************************************
00:08:00.542 END TEST json_config_extra_key
00:08:00.542 ************************************
00:08:00.542 14:02:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
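The shutdown handshake traced above, reduced to its core: send SIGINT, then poll the pid with kill -0 at 0.5 s intervals, giving up after 30 tries (this run needed three passes before the target exited):

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # process gone? stop polling
        sleep 0.5
    done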
00:08:00.542 14:02:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:00.542 14:02:17 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:00.542 14:02:17 -- common/autotest_common.sh@10 -- # set +x
00:08:00.542 ************************************
00:08:00.542 START TEST alias_rpc
00:08:00.542 ************************************
00:08:00.542 14:02:17 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:08:00.542 * Looking for test storage...
00:08:00.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:08:00.542 14:02:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:08:00.542 14:02:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2403352
00:08:00.542 14:02:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:00.542 14:02:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2403352
00:08:00.542 14:02:17 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2403352 ']'
00:08:00.542 14:02:17 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:00.542 14:02:17 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:00.542 14:02:17 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:00.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:00.542 14:02:17 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:00.542 14:02:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:00.801 [2024-07-26 14:02:17.475564] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:08:00.801 [2024-07-26 14:02:17.475664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403352 ]
00:08:00.801 EAL: No free 2048 kB hugepages reported on node 1
00:08:00.801 [2024-07-26 14:02:17.549739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.801 [2024-07-26 14:02:17.677040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.366 14:02:17 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:01.366 14:02:17 alias_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:01.366 14:02:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:08:01.624 14:02:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2403352
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2403352 ']'
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2403352
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@955 -- # uname
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2403352
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2403352'
00:08:01.624 killing process with pid 2403352
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@969 -- # kill 2403352
00:08:01.624 14:02:18 alias_rpc -- common/autotest_common.sh@974 -- # wait 2403352
00:08:02.190
00:08:02.190 real 0m1.497s
00:08:02.190 user 0m1.672s
00:08:02.190 sys 0m0.495s
00:08:02.191 14:02:18 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:02.191 14:02:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:02.191 ************************************
00:08:02.191 END TEST alias_rpc
00:08:02.191 ************************************
00:08:02.191 14:02:18 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:08:02.191 14:02:18 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:08:02.191 14:02:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:02.191 14:02:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:02.191 14:02:18 -- common/autotest_common.sh@10 -- # set +x
00:08:02.191 ************************************
00:08:02.191 START TEST spdkcli_tcp
00:08:02.191 ************************************
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:08:02.191 * Looking for test storage...
00:08:02.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2403655
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:08:02.191 14:02:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2403655
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2403655 ']'
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:02.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:02.191 14:02:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:02.191 [2024-07-26 14:02:19.050556] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:08:02.191 [2024-07-26 14:02:19.050655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403655 ] 00:08:02.449 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.449 [2024-07-26 14:02:19.119781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.449 [2024-07-26 14:02:19.243123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.449 [2024-07-26 14:02:19.243130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.707 14:02:19 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.707 14:02:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:02.707 14:02:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2403665 00:08:02.707 14:02:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:02.707 14:02:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:03.273 [ 00:08:03.273 "bdev_malloc_delete", 00:08:03.273 "bdev_malloc_create", 00:08:03.273 "bdev_null_resize", 00:08:03.273 "bdev_null_delete", 00:08:03.273 "bdev_null_create", 00:08:03.273 "bdev_nvme_cuse_unregister", 00:08:03.273 "bdev_nvme_cuse_register", 00:08:03.273 "bdev_opal_new_user", 00:08:03.273 "bdev_opal_set_lock_state", 00:08:03.273 "bdev_opal_delete", 00:08:03.273 "bdev_opal_get_info", 00:08:03.274 "bdev_opal_create", 00:08:03.274 "bdev_nvme_opal_revert", 00:08:03.274 "bdev_nvme_opal_init", 00:08:03.274 "bdev_nvme_send_cmd", 00:08:03.274 "bdev_nvme_get_path_iostat", 00:08:03.274 "bdev_nvme_get_mdns_discovery_info", 00:08:03.274 "bdev_nvme_stop_mdns_discovery", 00:08:03.274 "bdev_nvme_start_mdns_discovery", 00:08:03.274 "bdev_nvme_set_multipath_policy", 00:08:03.274 "bdev_nvme_set_preferred_path", 00:08:03.274 "bdev_nvme_get_io_paths", 00:08:03.274 "bdev_nvme_remove_error_injection", 00:08:03.274 "bdev_nvme_add_error_injection", 00:08:03.274 "bdev_nvme_get_discovery_info", 00:08:03.274 "bdev_nvme_stop_discovery", 00:08:03.274 "bdev_nvme_start_discovery", 00:08:03.274 "bdev_nvme_get_controller_health_info", 00:08:03.274 "bdev_nvme_disable_controller", 00:08:03.274 "bdev_nvme_enable_controller", 00:08:03.274 "bdev_nvme_reset_controller", 00:08:03.274 "bdev_nvme_get_transport_statistics", 00:08:03.274 "bdev_nvme_apply_firmware", 00:08:03.274 "bdev_nvme_detach_controller", 00:08:03.274 "bdev_nvme_get_controllers", 00:08:03.274 "bdev_nvme_attach_controller", 00:08:03.274 "bdev_nvme_set_hotplug", 00:08:03.274 "bdev_nvme_set_options", 00:08:03.274 "bdev_passthru_delete", 00:08:03.274 "bdev_passthru_create", 00:08:03.274 "bdev_lvol_set_parent_bdev", 00:08:03.274 "bdev_lvol_set_parent", 00:08:03.274 "bdev_lvol_check_shallow_copy", 00:08:03.274 "bdev_lvol_start_shallow_copy", 00:08:03.274 "bdev_lvol_grow_lvstore", 00:08:03.274 "bdev_lvol_get_lvols", 00:08:03.274 "bdev_lvol_get_lvstores", 00:08:03.274 "bdev_lvol_delete", 00:08:03.274 "bdev_lvol_set_read_only", 00:08:03.274 "bdev_lvol_resize", 00:08:03.274 "bdev_lvol_decouple_parent", 00:08:03.274 "bdev_lvol_inflate", 00:08:03.274 "bdev_lvol_rename", 00:08:03.274 "bdev_lvol_clone_bdev", 00:08:03.274 "bdev_lvol_clone", 00:08:03.274 "bdev_lvol_snapshot", 00:08:03.274 "bdev_lvol_create", 00:08:03.274 "bdev_lvol_delete_lvstore", 00:08:03.274 
"bdev_lvol_rename_lvstore", 00:08:03.274 "bdev_lvol_create_lvstore", 00:08:03.274 "bdev_raid_set_options", 00:08:03.274 "bdev_raid_remove_base_bdev", 00:08:03.274 "bdev_raid_add_base_bdev", 00:08:03.274 "bdev_raid_delete", 00:08:03.274 "bdev_raid_create", 00:08:03.274 "bdev_raid_get_bdevs", 00:08:03.274 "bdev_error_inject_error", 00:08:03.274 "bdev_error_delete", 00:08:03.274 "bdev_error_create", 00:08:03.274 "bdev_split_delete", 00:08:03.274 "bdev_split_create", 00:08:03.274 "bdev_delay_delete", 00:08:03.274 "bdev_delay_create", 00:08:03.274 "bdev_delay_update_latency", 00:08:03.274 "bdev_zone_block_delete", 00:08:03.274 "bdev_zone_block_create", 00:08:03.274 "blobfs_create", 00:08:03.274 "blobfs_detect", 00:08:03.274 "blobfs_set_cache_size", 00:08:03.274 "bdev_aio_delete", 00:08:03.274 "bdev_aio_rescan", 00:08:03.274 "bdev_aio_create", 00:08:03.274 "bdev_ftl_set_property", 00:08:03.274 "bdev_ftl_get_properties", 00:08:03.274 "bdev_ftl_get_stats", 00:08:03.274 "bdev_ftl_unmap", 00:08:03.274 "bdev_ftl_unload", 00:08:03.274 "bdev_ftl_delete", 00:08:03.274 "bdev_ftl_load", 00:08:03.274 "bdev_ftl_create", 00:08:03.274 "bdev_virtio_attach_controller", 00:08:03.274 "bdev_virtio_scsi_get_devices", 00:08:03.274 "bdev_virtio_detach_controller", 00:08:03.274 "bdev_virtio_blk_set_hotplug", 00:08:03.274 "bdev_iscsi_delete", 00:08:03.274 "bdev_iscsi_create", 00:08:03.274 "bdev_iscsi_set_options", 00:08:03.274 "accel_error_inject_error", 00:08:03.274 "ioat_scan_accel_module", 00:08:03.274 "dsa_scan_accel_module", 00:08:03.274 "iaa_scan_accel_module", 00:08:03.274 "vfu_virtio_create_scsi_endpoint", 00:08:03.274 "vfu_virtio_scsi_remove_target", 00:08:03.274 "vfu_virtio_scsi_add_target", 00:08:03.274 "vfu_virtio_create_blk_endpoint", 00:08:03.274 "vfu_virtio_delete_endpoint", 00:08:03.274 "keyring_file_remove_key", 00:08:03.274 "keyring_file_add_key", 00:08:03.274 "keyring_linux_set_options", 00:08:03.274 "iscsi_get_histogram", 00:08:03.274 "iscsi_enable_histogram", 00:08:03.274 "iscsi_set_options", 00:08:03.274 "iscsi_get_auth_groups", 00:08:03.274 "iscsi_auth_group_remove_secret", 00:08:03.274 "iscsi_auth_group_add_secret", 00:08:03.274 "iscsi_delete_auth_group", 00:08:03.274 "iscsi_create_auth_group", 00:08:03.274 "iscsi_set_discovery_auth", 00:08:03.274 "iscsi_get_options", 00:08:03.274 "iscsi_target_node_request_logout", 00:08:03.274 "iscsi_target_node_set_redirect", 00:08:03.274 "iscsi_target_node_set_auth", 00:08:03.274 "iscsi_target_node_add_lun", 00:08:03.274 "iscsi_get_stats", 00:08:03.274 "iscsi_get_connections", 00:08:03.274 "iscsi_portal_group_set_auth", 00:08:03.274 "iscsi_start_portal_group", 00:08:03.274 "iscsi_delete_portal_group", 00:08:03.274 "iscsi_create_portal_group", 00:08:03.274 "iscsi_get_portal_groups", 00:08:03.274 "iscsi_delete_target_node", 00:08:03.274 "iscsi_target_node_remove_pg_ig_maps", 00:08:03.274 "iscsi_target_node_add_pg_ig_maps", 00:08:03.274 "iscsi_create_target_node", 00:08:03.274 "iscsi_get_target_nodes", 00:08:03.274 "iscsi_delete_initiator_group", 00:08:03.274 "iscsi_initiator_group_remove_initiators", 00:08:03.274 "iscsi_initiator_group_add_initiators", 00:08:03.274 "iscsi_create_initiator_group", 00:08:03.274 "iscsi_get_initiator_groups", 00:08:03.274 "nvmf_set_crdt", 00:08:03.274 "nvmf_set_config", 00:08:03.274 "nvmf_set_max_subsystems", 00:08:03.274 "nvmf_stop_mdns_prr", 00:08:03.274 "nvmf_publish_mdns_prr", 00:08:03.274 "nvmf_subsystem_get_listeners", 00:08:03.274 "nvmf_subsystem_get_qpairs", 00:08:03.274 "nvmf_subsystem_get_controllers", 00:08:03.274 
"nvmf_get_stats", 00:08:03.274 "nvmf_get_transports", 00:08:03.274 "nvmf_create_transport", 00:08:03.274 "nvmf_get_targets", 00:08:03.274 "nvmf_delete_target", 00:08:03.274 "nvmf_create_target", 00:08:03.274 "nvmf_subsystem_allow_any_host", 00:08:03.274 "nvmf_subsystem_remove_host", 00:08:03.274 "nvmf_subsystem_add_host", 00:08:03.274 "nvmf_ns_remove_host", 00:08:03.274 "nvmf_ns_add_host", 00:08:03.274 "nvmf_subsystem_remove_ns", 00:08:03.274 "nvmf_subsystem_add_ns", 00:08:03.274 "nvmf_subsystem_listener_set_ana_state", 00:08:03.274 "nvmf_discovery_get_referrals", 00:08:03.274 "nvmf_discovery_remove_referral", 00:08:03.274 "nvmf_discovery_add_referral", 00:08:03.274 "nvmf_subsystem_remove_listener", 00:08:03.274 "nvmf_subsystem_add_listener", 00:08:03.274 "nvmf_delete_subsystem", 00:08:03.274 "nvmf_create_subsystem", 00:08:03.274 "nvmf_get_subsystems", 00:08:03.274 "env_dpdk_get_mem_stats", 00:08:03.274 "nbd_get_disks", 00:08:03.274 "nbd_stop_disk", 00:08:03.274 "nbd_start_disk", 00:08:03.274 "ublk_recover_disk", 00:08:03.274 "ublk_get_disks", 00:08:03.274 "ublk_stop_disk", 00:08:03.274 "ublk_start_disk", 00:08:03.274 "ublk_destroy_target", 00:08:03.274 "ublk_create_target", 00:08:03.274 "virtio_blk_create_transport", 00:08:03.274 "virtio_blk_get_transports", 00:08:03.274 "vhost_controller_set_coalescing", 00:08:03.274 "vhost_get_controllers", 00:08:03.274 "vhost_delete_controller", 00:08:03.274 "vhost_create_blk_controller", 00:08:03.274 "vhost_scsi_controller_remove_target", 00:08:03.274 "vhost_scsi_controller_add_target", 00:08:03.274 "vhost_start_scsi_controller", 00:08:03.274 "vhost_create_scsi_controller", 00:08:03.274 "thread_set_cpumask", 00:08:03.274 "framework_get_governor", 00:08:03.274 "framework_get_scheduler", 00:08:03.274 "framework_set_scheduler", 00:08:03.274 "framework_get_reactors", 00:08:03.274 "thread_get_io_channels", 00:08:03.274 "thread_get_pollers", 00:08:03.274 "thread_get_stats", 00:08:03.274 "framework_monitor_context_switch", 00:08:03.274 "spdk_kill_instance", 00:08:03.274 "log_enable_timestamps", 00:08:03.274 "log_get_flags", 00:08:03.274 "log_clear_flag", 00:08:03.274 "log_set_flag", 00:08:03.274 "log_get_level", 00:08:03.274 "log_set_level", 00:08:03.274 "log_get_print_level", 00:08:03.274 "log_set_print_level", 00:08:03.274 "framework_enable_cpumask_locks", 00:08:03.274 "framework_disable_cpumask_locks", 00:08:03.274 "framework_wait_init", 00:08:03.274 "framework_start_init", 00:08:03.274 "scsi_get_devices", 00:08:03.274 "bdev_get_histogram", 00:08:03.274 "bdev_enable_histogram", 00:08:03.274 "bdev_set_qos_limit", 00:08:03.274 "bdev_set_qd_sampling_period", 00:08:03.274 "bdev_get_bdevs", 00:08:03.274 "bdev_reset_iostat", 00:08:03.274 "bdev_get_iostat", 00:08:03.274 "bdev_examine", 00:08:03.274 "bdev_wait_for_examine", 00:08:03.274 "bdev_set_options", 00:08:03.274 "notify_get_notifications", 00:08:03.274 "notify_get_types", 00:08:03.274 "accel_get_stats", 00:08:03.274 "accel_set_options", 00:08:03.274 "accel_set_driver", 00:08:03.274 "accel_crypto_key_destroy", 00:08:03.274 "accel_crypto_keys_get", 00:08:03.274 "accel_crypto_key_create", 00:08:03.274 "accel_assign_opc", 00:08:03.274 "accel_get_module_info", 00:08:03.274 "accel_get_opc_assignments", 00:08:03.274 "vmd_rescan", 00:08:03.274 "vmd_remove_device", 00:08:03.274 "vmd_enable", 00:08:03.274 "sock_get_default_impl", 00:08:03.275 "sock_set_default_impl", 00:08:03.275 "sock_impl_set_options", 00:08:03.275 "sock_impl_get_options", 00:08:03.275 "iobuf_get_stats", 00:08:03.275 "iobuf_set_options", 
00:08:03.275 "keyring_get_keys", 00:08:03.275 "framework_get_pci_devices", 00:08:03.275 "framework_get_config", 00:08:03.275 "framework_get_subsystems", 00:08:03.275 "vfu_tgt_set_base_path", 00:08:03.275 "trace_get_info", 00:08:03.275 "trace_get_tpoint_group_mask", 00:08:03.275 "trace_disable_tpoint_group", 00:08:03.275 "trace_enable_tpoint_group", 00:08:03.275 "trace_clear_tpoint_mask", 00:08:03.275 "trace_set_tpoint_mask", 00:08:03.275 "spdk_get_version", 00:08:03.275 "rpc_get_methods" 00:08:03.275 ] 00:08:03.275 14:02:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:03.275 14:02:19 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.275 14:02:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.275 14:02:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:03.275 14:02:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2403655 00:08:03.275 14:02:19 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2403655 ']' 00:08:03.275 14:02:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2403655 00:08:03.275 14:02:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:03.275 14:02:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.275 14:02:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2403655 00:08:03.275 14:02:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.275 14:02:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.275 14:02:20 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2403655' 00:08:03.275 killing process with pid 2403655 00:08:03.275 14:02:20 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2403655 00:08:03.275 14:02:20 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2403655 00:08:03.842 00:08:03.842 real 0m1.601s 00:08:03.842 user 0m2.966s 00:08:03.842 sys 0m0.554s 00:08:03.842 14:02:20 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.842 14:02:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.842 ************************************ 00:08:03.842 END TEST spdkcli_tcp 00:08:03.842 ************************************ 00:08:03.842 14:02:20 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:03.842 14:02:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.842 14:02:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.842 14:02:20 -- common/autotest_common.sh@10 -- # set +x 00:08:03.842 ************************************ 00:08:03.842 START TEST dpdk_mem_utility 00:08:03.842 ************************************ 00:08:03.842 14:02:20 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:03.842 * Looking for test storage... 
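The spdkcli_tcp run above talks to the target over TCP even though spdk_tgt only listens on a UNIX-domain socket: tcp.sh@30 starts socat to bridge TCP-LISTEN:9998 to UNIX-CONNECT:/var/tmp/spdk.sock, and rpc.py is then pointed at 127.0.0.1:9998 with enough retries to ride out the bridge starting up. The same bridge stand-alone (here and below, $SPDK_DIR abbreviates the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path from the trace):

    # Expose the spdk_tgt UNIX RPC socket on TCP port 9998, as in tcp.sh@30.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100 retries the connection and -t 2 is the per-request timeout,
    # the exact flags used at tcp.sh@33 above.
    "$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"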
00:08:03.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:03.842 14:02:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:03.842 14:02:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2403861 00:08:03.842 14:02:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:03.842 14:02:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2403861 00:08:03.842 14:02:20 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2403861 ']' 00:08:03.842 14:02:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.842 14:02:20 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.842 14:02:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.842 14:02:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.842 14:02:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:03.842 [2024-07-26 14:02:20.715414] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:03.842 [2024-07-26 14:02:20.715539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403861 ] 00:08:04.100 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.100 [2024-07-26 14:02:20.788877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.100 [2024-07-26 14:02:20.911652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.359 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.359 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:04.359 14:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:04.359 14:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:04.359 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.359 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:04.359 { 00:08:04.359 "filename": "/tmp/spdk_mem_dump.txt" 00:08:04.359 } 00:08:04.359 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.359 14:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:04.359 DPDK memory size 814.000000 MiB in 1 heap(s) 00:08:04.359 1 heaps totaling size 814.000000 MiB 00:08:04.359 size: 814.000000 MiB heap id: 0 00:08:04.359 end heaps---------- 00:08:04.359 8 mempools totaling size 598.116089 MiB 00:08:04.359 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:04.359 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:04.359 size: 84.521057 MiB name: bdev_io_2403861 00:08:04.359 size: 51.011292 MiB name: evtpool_2403861 00:08:04.359 
size: 50.003479 MiB name: msgpool_2403861 00:08:04.359 size: 21.763794 MiB name: PDU_Pool 00:08:04.359 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:04.359 size: 0.026123 MiB name: Session_Pool 00:08:04.359 end mempools------- 00:08:04.359 6 memzones totaling size 4.142822 MiB 00:08:04.359 size: 1.000366 MiB name: RG_ring_0_2403861 00:08:04.359 size: 1.000366 MiB name: RG_ring_1_2403861 00:08:04.359 size: 1.000366 MiB name: RG_ring_4_2403861 00:08:04.359 size: 1.000366 MiB name: RG_ring_5_2403861 00:08:04.359 size: 0.125366 MiB name: RG_ring_2_2403861 00:08:04.359 size: 0.015991 MiB name: RG_ring_3_2403861 00:08:04.359 end memzones------- 00:08:04.359 14:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:04.618 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:08:04.619 list of free elements. size: 12.519348 MiB 00:08:04.619 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:04.619 element at address: 0x200018e00000 with size: 0.999878 MiB 00:08:04.619 element at address: 0x200019000000 with size: 0.999878 MiB 00:08:04.619 element at address: 0x200003e00000 with size: 0.996277 MiB 00:08:04.619 element at address: 0x200031c00000 with size: 0.994446 MiB 00:08:04.619 element at address: 0x200013800000 with size: 0.978699 MiB 00:08:04.619 element at address: 0x200007000000 with size: 0.959839 MiB 00:08:04.619 element at address: 0x200019200000 with size: 0.936584 MiB 00:08:04.619 element at address: 0x200000200000 with size: 0.841614 MiB 00:08:04.619 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:08:04.619 element at address: 0x20000b200000 with size: 0.490723 MiB 00:08:04.619 element at address: 0x200000800000 with size: 0.487793 MiB 00:08:04.619 element at address: 0x200019400000 with size: 0.485657 MiB 00:08:04.619 element at address: 0x200027e00000 with size: 0.410034 MiB 00:08:04.619 element at address: 0x200003a00000 with size: 0.355530 MiB 00:08:04.619 list of standard malloc elements. 
size: 199.218079 MiB 00:08:04.619 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:08:04.619 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:08:04.619 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:04.619 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:08:04.619 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:04.619 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:04.619 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:08:04.619 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:04.619 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:08:04.619 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:04.619 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:08:04.619 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200003adb300 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200003adb500 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200003affa80 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:08:04.619 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:08:04.619 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:08:04.619 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:08:04.619 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:08:04.619 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:08:04.619 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200027e69040 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:08:04.619 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:08:04.619 list of memzone associated elements. 
size: 602.262573 MiB 00:08:04.619 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:08:04.619 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:04.619 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:08:04.619 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:04.619 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:08:04.619 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2403861_0 00:08:04.619 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:04.619 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2403861_0 00:08:04.619 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:04.619 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2403861_0 00:08:04.619 element at address: 0x2000195be940 with size: 20.255554 MiB 00:08:04.619 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:04.619 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:08:04.619 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:04.619 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:04.619 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2403861 00:08:04.619 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:04.619 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2403861 00:08:04.619 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:04.619 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2403861 00:08:04.619 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:08:04.619 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:04.619 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:08:04.619 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:04.619 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:08:04.619 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:04.619 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:08:04.619 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:04.619 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:04.619 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2403861 00:08:04.619 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:04.619 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2403861 00:08:04.619 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:08:04.619 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2403861 00:08:04.619 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:08:04.619 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2403861 00:08:04.619 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:08:04.619 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2403861 00:08:04.619 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:08:04.619 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:04.619 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:08:04.619 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:04.619 element at address: 0x20001947c540 with size: 0.250488 MiB 00:08:04.619 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:04.619 element at address: 0x200003adf880 with size: 0.125488 MiB 00:08:04.619 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2403861 00:08:04.619 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:08:04.619 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:04.619 element at address: 0x200027e69100 with size: 0.023743 MiB 00:08:04.620 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:04.620 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:08:04.620 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2403861 00:08:04.620 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:08:04.620 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:04.620 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:08:04.620 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2403861 00:08:04.620 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:08:04.620 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2403861 00:08:04.620 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:08:04.620 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:04.620 14:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:04.620 14:02:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2403861 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2403861 ']' 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2403861 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2403861 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2403861' 00:08:04.620 killing process with pid 2403861 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2403861 00:08:04.620 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2403861 00:08:05.220 00:08:05.220 real 0m1.243s 00:08:05.220 user 0m1.252s 00:08:05.220 sys 0m0.453s 00:08:05.220 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.220 14:02:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:05.220 ************************************ 00:08:05.220 END TEST dpdk_mem_utility 00:08:05.220 ************************************ 00:08:05.220 14:02:21 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:05.220 14:02:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.220 14:02:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.220 14:02:21 -- common/autotest_common.sh@10 -- # set +x 00:08:05.220 ************************************ 00:08:05.220 START TEST event 00:08:05.220 ************************************ 00:08:05.220 14:02:21 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:05.220 * Looking for test storage... 
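dpdk_mem_utility above exercises the memory introspection path in two steps: the env_dpdk_get_mem_stats RPC makes the target dump its DPDK heap state to /tmp/spdk_mem_dump.txt (the filename returned in the RPC reply), and scripts/dpdk_mem_info.py then renders that dump as the heap, mempool, and memzone summaries printed above, with -m 0 switching to the per-element view of heap id 0. Replayed stand-alone against a running target ($SPDK_DIR as before):

    # Ask the target to dump DPDK memory stats, then summarize the dump.
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    "$SPDK_DIR/scripts/dpdk_mem_info.py"                # heaps, mempools, memzones
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0           # element listing for heap 0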
00:08:05.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:05.220 14:02:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:05.220 14:02:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:05.220 14:02:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:05.220 14:02:21 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:05.220 14:02:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.220 14:02:21 event -- common/autotest_common.sh@10 -- # set +x 00:08:05.220 ************************************ 00:08:05.220 START TEST event_perf 00:08:05.220 ************************************ 00:08:05.220 14:02:21 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:05.220 Running I/O for 1 seconds...[2024-07-26 14:02:22.011363] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:05.220 [2024-07-26 14:02:22.011505] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404055 ] 00:08:05.220 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.220 [2024-07-26 14:02:22.100374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.478 [2024-07-26 14:02:22.230971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.478 [2024-07-26 14:02:22.231032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.479 [2024-07-26 14:02:22.231082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.479 [2024-07-26 14:02:22.231085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.852 Running I/O for 1 seconds... 00:08:06.852 lcore 0: 208700 00:08:06.852 lcore 1: 208700 00:08:06.852 lcore 2: 208700 00:08:06.852 lcore 3: 208700 00:08:06.852 done. 00:08:06.852 00:08:06.852 real 0m1.371s 00:08:06.852 user 0m4.250s 00:08:06.852 sys 0m0.113s 00:08:06.852 14:02:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.852 14:02:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.852 ************************************ 00:08:06.852 END TEST event_perf 00:08:06.852 ************************************ 00:08:06.852 14:02:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:06.852 14:02:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:06.852 14:02:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.852 14:02:23 event -- common/autotest_common.sh@10 -- # set +x 00:08:06.852 ************************************ 00:08:06.852 START TEST event_reactor 00:08:06.852 ************************************ 00:08:06.852 14:02:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:06.852 [2024-07-26 14:02:23.438661] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
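event_perf above is launched with -m 0xF -t 1, and the log bears the mask out: "Total cores available: 4", four "Reactor started" notices, and one lcore counter per core, each near 208700 events for the one-second run. Every set bit in the mask selects one core and gets one reactor. A plain-bash illustration of that mask-to-core mapping (not an SPDK helper, just arithmetic):

    # Expand a core mask such as 0xF into the cores it selects.
    mask_to_cores() {
        local mask=$(( $1 )) core=0
        local -a cores=()
        while (( mask )); do
            if (( mask & 1 )); then cores+=("$core"); fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0xF   # prints: 0 1 2 3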
00:08:06.852 [2024-07-26 14:02:23.438764] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404338 ] 00:08:06.852 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.852 [2024-07-26 14:02:23.517096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.852 [2024-07-26 14:02:23.640970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.226 test_start 00:08:08.226 oneshot 00:08:08.226 tick 100 00:08:08.226 tick 100 00:08:08.226 tick 250 00:08:08.226 tick 100 00:08:08.226 tick 100 00:08:08.226 tick 100 00:08:08.226 tick 250 00:08:08.226 tick 500 00:08:08.226 tick 100 00:08:08.226 tick 100 00:08:08.226 tick 250 00:08:08.226 tick 100 00:08:08.226 tick 100 00:08:08.226 test_end 00:08:08.226 00:08:08.226 real 0m1.353s 00:08:08.226 user 0m1.245s 00:08:08.226 sys 0m0.103s 00:08:08.226 14:02:24 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.226 14:02:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:08.226 ************************************ 00:08:08.226 END TEST event_reactor 00:08:08.226 ************************************ 00:08:08.226 14:02:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:08.226 14:02:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:08.226 14:02:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.226 14:02:24 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.226 ************************************ 00:08:08.226 START TEST event_reactor_perf 00:08:08.226 ************************************ 00:08:08.226 14:02:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:08.226 [2024-07-26 14:02:24.839603] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:08:08.226 [2024-07-26 14:02:24.839668] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404493 ] 00:08:08.226 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.226 [2024-07-26 14:02:24.910837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.226 [2024-07-26 14:02:25.035099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.599 test_start 00:08:09.599 test_end 00:08:09.599 Performance: 352842 events per second 00:08:09.599 00:08:09.599 real 0m1.340s 00:08:09.599 user 0m1.246s 00:08:09.599 sys 0m0.088s 00:08:09.599 14:02:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.599 14:02:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:09.599 ************************************ 00:08:09.599 END TEST event_reactor_perf 00:08:09.599 ************************************ 00:08:09.599 14:02:26 event -- event/event.sh@49 -- # uname -s 00:08:09.599 14:02:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:09.599 14:02:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:09.599 14:02:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.599 14:02:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.599 14:02:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:09.599 ************************************ 00:08:09.599 START TEST event_scheduler 00:08:09.599 ************************************ 00:08:09.599 14:02:26 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:09.599 * Looking for test storage... 00:08:09.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:09.599 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:09.599 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2404671 00:08:09.599 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:09.599 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:09.599 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2404671 00:08:09.599 14:02:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2404671 ']' 00:08:09.599 14:02:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.599 14:02:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.599 14:02:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
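The scheduler app is started with -m 0xF -p 0x2 --wait-for-rpc, which lines up with the EAL banner in the trace that follows (--main-lcore=2, four reactors). --wait-for-rpc is the key flag: framework initialization is deferred so the test can select the dynamic scheduler over RPC first, which is why scheduler.sh@39 below issues framework_set_scheduler before scheduler.sh@40 issues framework_start_init. The trace drives these through the rpc_cmd helper; a stand-alone rpc.py equivalent, using only method names that appear in the rpc_get_methods listing earlier in this log:

    # Configure-then-start sequence for an app launched with --wait-for-rpc.
    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc framework_set_scheduler dynamic   # only valid before init completes
    $rpc framework_start_init
    $rpc framework_wait_init               # returns once startup has finished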
00:08:09.599 14:02:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.599 14:02:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:09.599 [2024-07-26 14:02:26.376538] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:09.599 [2024-07-26 14:02:26.376630] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404671 ] 00:08:09.599 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.859 [2024-07-26 14:02:26.494324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.859 [2024-07-26 14:02:26.691587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.859 [2024-07-26 14:02:26.691645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.859 [2024-07-26 14:02:26.691699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.859 [2024-07-26 14:02:26.691702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:10.117 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:10.117 [2024-07-26 14:02:26.768615] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:10.117 [2024-07-26 14:02:26.768646] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:08:10.117 [2024-07-26 14:02:26.768665] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:10.117 [2024-07-26 14:02:26.768679] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:10.117 [2024-07-26 14:02:26.768691] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.117 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:10.117 [2024-07-26 14:02:26.930985] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.117 14:02:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.117 14:02:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:10.117 ************************************ 00:08:10.117 START TEST scheduler_create_thread 00:08:10.117 ************************************ 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.117 2 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.117 3 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.117 14:02:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.375 4 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.375 5 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.375 6 00:08:10.375 14:02:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.376 7 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.376 8 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.376 9 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.376 10 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.376 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.634 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.634 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:10.634 14:02:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:10.634 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.634 14:02:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.569 14:02:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.569 14:02:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:11.569 14:02:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.569 14:02:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.502 14:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.502 14:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:12.502 14:02:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:12.502 14:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.502 14:02:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.435 14:02:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.435 00:08:13.435 real 0m3.233s 00:08:13.435 user 0m0.020s 00:08:13.435 sys 0m0.003s 00:08:13.435 14:02:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.435 14:02:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.435 ************************************ 00:08:13.435 END TEST scheduler_create_thread 00:08:13.435 ************************************ 00:08:13.435 14:02:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:13.435 14:02:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2404671 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2404671 ']' 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2404671 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2404671 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2404671' 00:08:13.435 killing process with pid 2404671 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2404671 00:08:13.435 14:02:30 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2404671 00:08:14.001 [2024-07-26 14:02:30.585389] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
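scheduler_create_thread above drives the test app entirely through an rpc.py plugin: --plugin scheduler_plugin adds the scheduler_thread_create, scheduler_thread_set_active, and scheduler_thread_delete subcommands, and the trace uses them to build four active pinned threads (masks 0x1 through 0x8), four idle pinned ones, a one-third-active thread, a half_active thread created idle and then set to 50% activity (thread 11), and a throwaway thread 12 that is created and deleted again. A stand-alone replay of that last exchange, assuming scheduler_plugin is importable by rpc.py (the PYTHONPATH setup is not shown in the log):

    # Replay of the create/delete pair at scheduler.sh@25-26 above.
    rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
    tid=$(rpc scheduler_thread_create -n deleted -a 100)   # prints the new thread id
    rpc scheduler_thread_delete "$tid"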
00:08:14.259 00:08:14.259 real 0m4.789s 00:08:14.259 user 0m8.176s 00:08:14.259 sys 0m0.487s 00:08:14.259 14:02:31 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.259 14:02:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:14.259 ************************************ 00:08:14.259 END TEST event_scheduler 00:08:14.259 ************************************ 00:08:14.259 14:02:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:14.259 14:02:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:14.259 14:02:31 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.259 14:02:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.259 14:02:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:14.259 ************************************ 00:08:14.259 START TEST app_repeat 00:08:14.259 ************************************ 00:08:14.259 14:02:31 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:14.259 14:02:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.259 14:02:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2405258 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2405258' 00:08:14.260 Process app_repeat pid: 2405258 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:14.260 spdk_app_start Round 0 00:08:14.260 14:02:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2405258 /var/tmp/spdk-nbd.sock 00:08:14.260 14:02:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2405258 ']' 00:08:14.260 14:02:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:14.260 14:02:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.260 14:02:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:14.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:14.260 14:02:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.260 14:02:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:14.260 [2024-07-26 14:02:31.128373] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
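app_repeat, just starting above, is the round-based bdev/nbd check: the app serves RPC on its own socket (-r /var/tmp/spdk-nbd.sock), runs on two cores (-m 0x3), and each round announced by "spdk_app_start Round 0" creates two malloc bdevs and exposes them as /dev/nbd0 and /dev/nbd1 for data verification, as the trace that follows shows. The two RPCs that set a round up, with arguments taken from that trace (64 MiB bdevs with 4096-byte blocks):

    # One round of app_repeat setup, against the app's custom RPC socket.
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096          # 64 MiB malloc bdev -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as /dev/nbd0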
00:08:14.260 [2024-07-26 14:02:31.128456] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405258 ] 00:08:14.518 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.518 [2024-07-26 14:02:31.196635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.518 [2024-07-26 14:02:31.318447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.518 [2024-07-26 14:02:31.318454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.774 14:02:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.774 14:02:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:14.774 14:02:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:15.069 Malloc0 00:08:15.069 14:02:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:15.634 Malloc1 00:08:15.634 14:02:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:15.634 14:02:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:16.199 /dev/nbd0 00:08:16.199 14:02:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:16.199 14:02:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:16.199 14:02:32 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:16.199 1+0 records in 00:08:16.199 1+0 records out 00:08:16.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214472 s, 19.1 MB/s 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:16.199 14:02:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:16.199 14:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:16.199 14:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.199 14:02:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:16.459 /dev/nbd1 00:08:16.460 14:02:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:16.460 14:02:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:16.460 1+0 records in 00:08:16.460 1+0 records out 00:08:16.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217745 s, 18.8 MB/s 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:16.460 14:02:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:16.460 14:02:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:16.460 14:02:33 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.460 14:02:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.460 14:02:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.460 14:02:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:16.724 14:02:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:16.724 { 00:08:16.724 "nbd_device": "/dev/nbd0", 00:08:16.724 "bdev_name": "Malloc0" 00:08:16.724 }, 00:08:16.724 { 00:08:16.724 "nbd_device": "/dev/nbd1", 00:08:16.724 "bdev_name": "Malloc1" 00:08:16.724 } 00:08:16.724 ]' 00:08:16.724 14:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:16.724 { 00:08:16.724 "nbd_device": "/dev/nbd0", 00:08:16.724 "bdev_name": "Malloc0" 00:08:16.724 }, 00:08:16.724 { 00:08:16.724 "nbd_device": "/dev/nbd1", 00:08:16.724 "bdev_name": "Malloc1" 00:08:16.724 } 00:08:16.724 ]' 00:08:16.724 14:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:16.981 /dev/nbd1' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:16.981 /dev/nbd1' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:16.981 256+0 records in 00:08:16.981 256+0 records out 00:08:16.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513382 s, 204 MB/s 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:16.981 256+0 records in 00:08:16.981 256+0 records out 00:08:16.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023851 s, 44.0 MB/s 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:16.981 256+0 records in 00:08:16.981 256+0 records out 00:08:16.981 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0256292 s, 40.9 MB/s 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:16.981 14:02:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:16.982 14:02:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.982 14:02:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.239 14:02:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.804 14:02:34 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.804 14:02:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:18.061 14:02:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:18.062 14:02:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:18.062 14:02:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:18.319 14:02:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:18.577 [2024-07-26 14:02:35.379115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:18.835 [2024-07-26 14:02:35.500142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.835 [2024-07-26 14:02:35.500142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.835 [2024-07-26 14:02:35.563676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:18.835 [2024-07-26 14:02:35.563752] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:21.387 14:02:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:21.387 14:02:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:21.387 spdk_app_start Round 1 00:08:21.387 14:02:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2405258 /var/tmp/spdk-nbd.sock 00:08:21.387 14:02:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2405258 ']' 00:08:21.387 14:02:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:21.387 14:02:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.387 14:02:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:21.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
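[Note] The round that just completed above condenses to the following sequence of RPC calls and dd/cmp checks, all of which appear verbatim in the trace (paths shortened for readability):

# Round 0 of app_repeat, condensed from the commands visible above.
sock=/var/tmp/spdk-nbd.sock
rpc() { scripts/rpc.py -s "$sock" "$@"; }
rpc bdev_malloc_create 64 4096         # -> Malloc0: 64 MiB bdev, 4 KiB blocks
rpc bdev_malloc_create 64 4096         # -> Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0   # export each bdev as a kernel nbd device
rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256     # 1 MiB pattern file
for d in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct  # write the pattern
    cmp -b -n 1M nbdrandtest $d                             # verify readback
done
rm nbdrandtest
rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
rpc spdk_kill_instance SIGTERM         # graceful shutdown ends the round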
00:08:21.387 14:02:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.387 14:02:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:21.952 14:02:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.952 14:02:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:21.952 14:02:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.210 Malloc0 00:08:22.210 14:02:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.777 Malloc1 00:08:22.777 14:02:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:22.777 14:02:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:23.036 /dev/nbd0 00:08:23.036 14:02:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:23.036 14:02:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:23.036 1+0 records in 00:08:23.036 1+0 records out 00:08:23.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188637 s, 21.7 MB/s 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:23.036 14:02:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:23.036 14:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.036 14:02:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.036 14:02:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:23.294 /dev/nbd1 00:08:23.294 14:02:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:23.294 14:02:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:23.294 1+0 records in 00:08:23.294 1+0 records out 00:08:23.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206536 s, 19.8 MB/s 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:23.294 14:02:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:23.294 14:02:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.294 14:02:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.294 14:02:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:23.294 14:02:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.294 14:02:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:23.860 { 00:08:23.860 "nbd_device": "/dev/nbd0", 00:08:23.860 "bdev_name": "Malloc0" 00:08:23.860 }, 00:08:23.860 { 00:08:23.860 "nbd_device": "/dev/nbd1", 00:08:23.860 "bdev_name": "Malloc1" 00:08:23.860 } 00:08:23.860 ]' 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.860 { 00:08:23.860 "nbd_device": "/dev/nbd0", 00:08:23.860 "bdev_name": "Malloc0" 00:08:23.860 }, 00:08:23.860 { 00:08:23.860 "nbd_device": "/dev/nbd1", 00:08:23.860 "bdev_name": "Malloc1" 00:08:23.860 } 00:08:23.860 ]' 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:23.860 /dev/nbd1' 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:23.860 /dev/nbd1' 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:23.860 14:02:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:24.118 256+0 records in 00:08:24.118 256+0 records out 00:08:24.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00795694 s, 132 MB/s 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:24.118 256+0 records in 00:08:24.118 256+0 records out 00:08:24.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241419 s, 43.4 MB/s 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:24.118 256+0 records in 00:08:24.118 256+0 records out 00:08:24.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253303 s, 41.4 MB/s 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.118 14:02:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.376 14:02:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:24.942 14:02:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.199 14:02:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.199 14:02:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.199 14:02:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:25.457 14:02:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:25.457 14:02:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:26.023 14:02:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:26.282 [2024-07-26 14:02:42.995940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:26.282 [2024-07-26 14:02:43.116606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.282 [2024-07-26 14:02:43.116610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.540 [2024-07-26 14:02:43.180649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:26.540 [2024-07-26 14:02:43.180719] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:29.067 14:02:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:29.067 14:02:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:29.067 spdk_app_start Round 2 00:08:29.067 14:02:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2405258 /var/tmp/spdk-nbd.sock 00:08:29.067 14:02:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2405258 ']' 00:08:29.067 14:02:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:29.067 14:02:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.067 14:02:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:29.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
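[Note] As in the previous rounds, every nbd_start_disk above is followed by a readiness probe (waitfornbd): the device must appear in /proc/partitions and a one-block direct-I/O read must succeed. A sketch of that probe, built from the commands in the trace; the retry delay is an assumption, since the log only shows the loop bounds:

# Sketch of the waitfornbd readiness probe.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    # One O_DIRECT read proves the kernel can actually reach the backend.
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [[ $size != 0 ]]        # the trace checks '[' 4096 '!=' 0 ']'
}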
00:08:29.067 14:02:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.067 14:02:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:29.326 14:02:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.326 14:02:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:29.326 14:02:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:29.892 Malloc0 00:08:29.892 14:02:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.458 Malloc1 00:08:30.458 14:02:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.458 14:02:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:31.024 /dev/nbd0 00:08:31.024 14:02:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.024 14:02:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:31.024 1+0 records in 00:08:31.024 1+0 records out 00:08:31.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163645 s, 25.0 MB/s 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:31.024 14:02:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:31.024 14:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.024 14:02:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.024 14:02:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:31.589 /dev/nbd1 00:08:31.589 14:02:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:31.589 14:02:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.589 1+0 records in 00:08:31.589 1+0 records out 00:08:31.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305553 s, 13.4 MB/s 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:31.589 14:02:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:31.589 14:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.589 14:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.589 14:02:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.589 14:02:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.589 14:02:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:31.848 { 00:08:31.848 "nbd_device": "/dev/nbd0", 00:08:31.848 "bdev_name": "Malloc0" 00:08:31.848 }, 00:08:31.848 { 00:08:31.848 "nbd_device": "/dev/nbd1", 00:08:31.848 "bdev_name": "Malloc1" 00:08:31.848 } 00:08:31.848 ]' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.848 { 00:08:31.848 "nbd_device": "/dev/nbd0", 00:08:31.848 "bdev_name": "Malloc0" 00:08:31.848 }, 00:08:31.848 { 00:08:31.848 "nbd_device": "/dev/nbd1", 00:08:31.848 "bdev_name": "Malloc1" 00:08:31.848 } 00:08:31.848 ]' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.848 /dev/nbd1' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.848 /dev/nbd1' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:31.848 256+0 records in 00:08:31.848 256+0 records out 00:08:31.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681819 s, 154 MB/s 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.848 256+0 records in 00:08:31.848 256+0 records out 00:08:31.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240474 s, 43.6 MB/s 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.848 256+0 records in 00:08:31.848 256+0 records out 00:08:31.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257114 s, 40.8 MB/s 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.848 14:02:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.414 14:02:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.672 14:02:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:32.930 14:02:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:32.930 14:02:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:33.496 14:02:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:33.754 [2024-07-26 14:02:50.446008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.754 [2024-07-26 14:02:50.568473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.754 [2024-07-26 14:02:50.568478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.754 [2024-07-26 14:02:50.631898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:33.754 [2024-07-26 14:02:50.631978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.330 14:02:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2405258 /var/tmp/spdk-nbd.sock 00:08:36.330 14:02:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2405258 ']' 00:08:36.330 14:02:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.330 14:02:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.330 14:02:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:36.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
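[Note] The nbd_get_count check that closes each round above asserts that no nbd devices survive teardown: nbd_get_disks must return an empty JSON array. A sketch of that assertion, matching the jq/grep pipeline in the trace:

# Post-teardown assertion: no nbd devices may remain.
disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)  # '[]' when empty
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)  # grep -c prints 0 on no match
[[ $count -eq 0 ]]                                 # otherwise the test fails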
00:08:36.330 14:02:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.330 14:02:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:36.897 14:02:53 event.app_repeat -- event/event.sh@39 -- # killprocess 2405258 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2405258 ']' 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2405258 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2405258 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2405258' 00:08:36.897 killing process with pid 2405258 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2405258 00:08:36.897 14:02:53 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2405258 00:08:37.155 spdk_app_start is called in Round 0. 00:08:37.155 Shutdown signal received, stop current app iteration 00:08:37.155 Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 reinitialization... 00:08:37.155 spdk_app_start is called in Round 1. 00:08:37.155 Shutdown signal received, stop current app iteration 00:08:37.155 Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 reinitialization... 00:08:37.155 spdk_app_start is called in Round 2. 00:08:37.155 Shutdown signal received, stop current app iteration 00:08:37.155 Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 reinitialization... 00:08:37.155 spdk_app_start is called in Round 3. 
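[Note] The "spdk_app_start is called in Round 0..3" notices above come from app_repeat being started with -t 4: the app restarts itself once per round while the test harness drives it from outside. An illustrative, hedged reconstruction of that driver loop (the real loop lives in test/event/event.sh):

# Shape of the app_repeat driver, reconstructed from the echoes above.
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
    # ... bdev + nbd data-verify pass (see the sketches earlier) ...
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3                  # let the app come back up for the next round
done
waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock   # final Round 3
killprocess $repeat_pid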
00:08:37.155 Shutdown signal received, stop current app iteration 00:08:37.156 14:02:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:37.156 14:02:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:37.156 00:08:37.156 real 0m22.880s 00:08:37.156 user 0m52.083s 00:08:37.156 sys 0m4.532s 00:08:37.156 14:02:53 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.156 14:02:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:37.156 ************************************ 00:08:37.156 END TEST app_repeat 00:08:37.156 ************************************ 00:08:37.156 14:02:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:37.156 14:02:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:37.156 14:02:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.156 14:02:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.156 14:02:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:37.414 ************************************ 00:08:37.414 START TEST cpu_locks 00:08:37.414 ************************************ 00:08:37.414 14:02:54 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:37.414 * Looking for test storage... 00:08:37.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:37.414 14:02:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:37.414 14:02:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:37.414 14:02:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:37.414 14:02:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:37.414 14:02:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.414 14:02:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.414 14:02:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.414 ************************************ 00:08:37.414 START TEST default_locks 00:08:37.414 ************************************ 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2408140 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2408140 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2408140 ']' 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
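[Note] The default_locks test starting here checks that a live SPDK target holds flock()s on its per-core CPU lock files, visible via lslocks. A sketch of the check, under the assumption that the lock-file names contain "spdk_cpu_lock" (the grep pattern is taken from the trace; the exact on-disk path is not shown in the log):

# What default_locks verifies (sketch).
build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten $spdk_tgt_pid
lslocks -p $spdk_tgt_pid | grep -q spdk_cpu_lock   # lock held while running
killprocess $spdk_tgt_pid
# After the kill, a second waitforlisten on the same pid must fail (the NOT
# wrapper below inverts its status), proving process and locks are gone.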
00:08:37.414 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.414 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.414 [2024-07-26 14:02:54.218190] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:37.414 [2024-07-26 14:02:54.218301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408140 ] 00:08:37.414 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.414 [2024-07-26 14:02:54.293520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.673 [2024-07-26 14:02:54.416324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.930 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.930 14:02:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:37.930 14:02:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2408140 00:08:37.930 14:02:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2408140 00:08:37.930 14:02:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:38.496 lslocks: write error 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2408140 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2408140 ']' 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2408140 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2408140 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2408140' 00:08:38.496 killing process with pid 2408140 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2408140 00:08:38.496 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2408140 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2408140 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2408140 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 2408140 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2408140 ']' 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2408140) - No such process 00:08:39.063 ERROR: process (pid: 2408140) is no longer running 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:39.063 00:08:39.063 real 0m1.589s 00:08:39.063 user 0m1.548s 00:08:39.063 sys 0m0.726s 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.063 14:02:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.063 ************************************ 00:08:39.063 END TEST default_locks 00:08:39.063 ************************************ 00:08:39.063 14:02:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:39.063 14:02:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.063 14:02:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.063 14:02:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.063 ************************************ 00:08:39.063 START TEST default_locks_via_rpc 00:08:39.063 ************************************ 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2408426 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
2408426 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2408426 ']' 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.063 14:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.063 [2024-07-26 14:02:55.903193] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:39.063 [2024-07-26 14:02:55.903381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408426 ] 00:08:39.321 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.321 [2024-07-26 14:02:55.995838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.321 [2024-07-26 14:02:56.121948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2408426 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2408426 00:08:39.579 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:39.837 14:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 2408426 00:08:39.837 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2408426 ']' 00:08:39.837 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2408426 00:08:39.837 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:39.837 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.837 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2408426 00:08:40.095 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.095 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.095 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2408426' 00:08:40.095 killing process with pid 2408426 00:08:40.095 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2408426 00:08:40.095 14:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2408426 00:08:40.354 00:08:40.354 real 0m1.426s 00:08:40.354 user 0m1.403s 00:08:40.354 sys 0m0.612s 00:08:40.354 14:02:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.354 14:02:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.354 ************************************ 00:08:40.354 END TEST default_locks_via_rpc 00:08:40.354 ************************************ 00:08:40.612 14:02:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:40.612 14:02:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.612 14:02:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.612 14:02:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.612 ************************************ 00:08:40.612 START TEST non_locking_app_on_locked_coremask 00:08:40.612 ************************************ 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2408588 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2408588 /var/tmp/spdk.sock 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2408588 ']' 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
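default_locks_via_rpc, which just finished, drives the same machinery over JSON-RPC instead of process lifecycle: framework_disable_cpumask_locks releases the per-core locks at runtime (the no_locks helper then finds no /var/tmp/spdk_cpu_lock_* files at all, suggesting the files are removed rather than merely unlocked) and framework_enable_cpumask_locks re-acquires them. Roughly equivalent calls via SPDK's rpc.py, assuming rpc_cmd in the trace wraps that script on the default socket:

  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null        # nothing while locks are off
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock      # the core-0 claim is back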
00:08:40.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.612 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.613 [2024-07-26 14:02:57.363639] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:40.613 [2024-07-26 14:02:57.363764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408588 ] 00:08:40.613 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.613 [2024-07-26 14:02:57.437944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.871 [2024-07-26 14:02:57.562243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2408670 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2408670 /var/tmp/spdk2.sock 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2408670 ']' 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:41.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.129 14:02:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:41.129 [2024-07-26 14:02:57.898201] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:41.129 [2024-07-26 14:02:57.898300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408670 ] 00:08:41.129 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.129 [2024-07-26 14:02:57.995979] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:41.129 [2024-07-26 14:02:57.996012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.388 [2024-07-26 14:02:58.249511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.954 14:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.954 14:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:41.954 14:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2408588 00:08:41.954 14:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:41.954 14:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2408588 00:08:43.364 lslocks: write error 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2408588 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2408588 ']' 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2408588 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2408588 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2408588' 00:08:43.364 killing process with pid 2408588 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2408588 00:08:43.364 14:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2408588 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2408670 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2408670 ']' 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2408670 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2408670 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2408670' 00:08:44.341 
killing process with pid 2408670 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2408670 00:08:44.341 14:03:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2408670 00:08:44.600 00:08:44.600 real 0m4.146s 00:08:44.600 user 0m4.411s 00:08:44.600 sys 0m1.488s 00:08:44.600 14:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.600 14:03:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.600 ************************************ 00:08:44.600 END TEST non_locking_app_on_locked_coremask 00:08:44.600 ************************************ 00:08:44.600 14:03:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:44.600 14:03:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.600 14:03:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.600 14:03:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.859 ************************************ 00:08:44.859 START TEST locking_app_on_unlocked_coremask 00:08:44.859 ************************************ 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2409220 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2409220 /var/tmp/spdk.sock 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2409220 ']' 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.859 14:03:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.859 [2024-07-26 14:03:01.586142] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:44.859 [2024-07-26 14:03:01.586285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409220 ] 00:08:44.859 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.859 [2024-07-26 14:03:01.669094] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:44.859 [2024-07-26 14:03:01.669139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.118 [2024-07-26 14:03:01.793371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2409256 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2409256 /var/tmp/spdk2.sock 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2409256 ']' 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:45.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.376 14:03:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:45.376 [2024-07-26 14:03:02.116380] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:08:45.376 [2024-07-26 14:03:02.116497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409256 ] 00:08:45.376 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.376 [2024-07-26 14:03:02.221452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.634 [2024-07-26 14:03:02.469369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.567 14:03:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.567 14:03:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:46.567 14:03:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2409256 00:08:46.567 14:03:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:46.567 14:03:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2409256 00:08:47.507 lslocks: write error 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2409220 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2409220 ']' 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2409220 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2409220 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2409220' 00:08:47.507 killing process with pid 2409220 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2409220 00:08:47.507 14:03:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2409220 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2409256 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2409256 ']' 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2409256 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2409256 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2409256' 00:08:48.881 killing process with pid 2409256 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2409256 00:08:48.881 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2409256 00:08:49.139 00:08:49.139 real 0m4.359s 00:08:49.139 user 0m4.707s 00:08:49.139 sys 0m1.477s 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 ************************************ 00:08:49.139 END TEST locking_app_on_unlocked_coremask 00:08:49.139 ************************************ 00:08:49.139 14:03:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:49.139 14:03:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.139 14:03:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.139 14:03:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 ************************************ 00:08:49.139 START TEST locking_app_on_locked_coremask 00:08:49.139 ************************************ 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2409830 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2409830 /var/tmp/spdk.sock 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2409830 ']' 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.139 14:03:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 [2024-07-26 14:03:05.983816] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
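locking_app_on_unlocked_coremask, which just passed, inverted that scenario: the first target gave up its claim (--disable-cpumask-locks, pid 2409220) and the lock-taking second instance (pid 2409256) claimed core 0 unopposed, so locks_exist was checked against the second pid. Condensed launch order, with paths from the log:

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # runs on core 0, no claim
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0 unopposed

locking_app_on_locked_coremask, starting above, now covers the remaining combination: both instances lock-taking on the same core.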
00:08:49.140 [2024-07-26 14:03:05.983913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409830 ] 00:08:49.140 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.397 [2024-07-26 14:03:06.056070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.397 [2024-07-26 14:03:06.177709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2409841 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2409841 /var/tmp/spdk2.sock 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2409841 /var/tmp/spdk2.sock 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2409841 /var/tmp/spdk2.sock 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2409841 ']' 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:49.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.656 14:03:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.656 [2024-07-26 14:03:06.515650] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
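This is the negative case: with pid 2409830 holding core 0, a second lock-taking launch must abort, and the suite wraps it in NOT waitforlisten so that failure is the passing outcome. Condensed expectation (messages quoted from the trace that follows):

  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  # -> app.c: Cannot create lock on core 0, probably process 2409830 has claimed it.
  # -> app.c: Unable to acquire lock on assigned core mask - exiting.
  echo $?                                  # non-zero exit is what NOT asserts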
00:08:49.656 [2024-07-26 14:03:06.515738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409841 ] 00:08:49.914 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.914 [2024-07-26 14:03:06.610597] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2409830 has claimed it. 00:08:49.914 [2024-07-26 14:03:06.610648] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:50.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2409841) - No such process 00:08:50.479 ERROR: process (pid: 2409841) is no longer running 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2409830 00:08:50.479 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2409830 00:08:50.480 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:51.411 lslocks: write error 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2409830 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2409830 ']' 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2409830 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2409830 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2409830' 00:08:51.411 killing process with pid 2409830 00:08:51.411 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2409830 00:08:51.412 14:03:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2409830 00:08:51.670 00:08:51.670 real 0m2.526s 00:08:51.670 user 0m2.780s 00:08:51.670 sys 0m0.869s 00:08:51.670 14:03:08 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.670 14:03:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.670 ************************************ 00:08:51.670 END TEST locking_app_on_locked_coremask 00:08:51.670 ************************************ 00:08:51.670 14:03:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:51.670 14:03:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.670 14:03:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.670 14:03:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:51.670 ************************************ 00:08:51.670 START TEST locking_overlapped_coremask 00:08:51.670 ************************************ 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2410381 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2410381 /var/tmp/spdk.sock 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2410381 ']' 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.670 14:03:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.928 [2024-07-26 14:03:08.606910] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
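locking_overlapped_coremask moves from identical masks to merely overlapping ones: the first target takes -m 0x7 and the second attempt below uses -m 0x1c, so the two collide on exactly one core. A quick check of the arithmetic behind the failure message that follows:

  # 0x07 = 0b00111 -> cores 0,1,2
  # 0x1c = 0b11100 -> cores 2,3,4
  printf 'overlap mask: %#x\n' $(( 0x07 & 0x1c ))   # 0x4, i.e. bit 2 set: core 2
  # hence "Cannot create lock on core 2, probably process 2410381 has claimed it."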
00:08:51.928 [2024-07-26 14:03:08.607017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410381 ] 00:08:51.928 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.928 [2024-07-26 14:03:08.702251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.185 [2024-07-26 14:03:08.831122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.185 [2024-07-26 14:03:08.831181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.185 [2024-07-26 14:03:08.831185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2410512 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2410512 /var/tmp/spdk2.sock 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2410512 /var/tmp/spdk2.sock 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2410512 /var/tmp/spdk2.sock 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2410512 ']' 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:52.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.443 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.443 [2024-07-26 14:03:09.171304] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:08:52.443 [2024-07-26 14:03:09.171410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410512 ] 00:08:52.443 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.443 [2024-07-26 14:03:09.273724] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2410381 has claimed it. 00:08:52.443 [2024-07-26 14:03:09.273782] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:53.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2410512) - No such process 00:08:53.377 ERROR: process (pid: 2410512) is no longer running 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2410381 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2410381 ']' 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2410381 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2410381 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2410381' 00:08:53.377 killing process with pid 2410381 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 2410381 00:08:53.377 14:03:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2410381 00:08:53.636 00:08:53.636 real 0m1.904s 00:08:53.636 user 0m4.999s 00:08:53.636 sys 0m0.516s 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.636 ************************************ 00:08:53.636 END TEST locking_overlapped_coremask 00:08:53.636 ************************************ 00:08:53.636 14:03:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:53.636 14:03:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.636 14:03:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.636 14:03:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:53.636 ************************************ 00:08:53.636 START TEST locking_overlapped_coremask_via_rpc 00:08:53.636 ************************************ 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2410927 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2410927 /var/tmp/spdk.sock 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2410927 ']' 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.636 14:03:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.895 [2024-07-26 14:03:10.579202] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:53.895 [2024-07-26 14:03:10.579385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410927 ] 00:08:53.895 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.895 [2024-07-26 14:03:10.674134] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
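locking_overlapped_coremask_via_rpc reruns the same overlap but defers the claim to RPC time: both targets come up with --disable-cpumask-locks, so both launches succeed despite the 0x7/0x1c overlap, and only then does each side try framework_enable_cpumask_locks. Launch sketch condensed from the trace:

  build/bin/spdk_tgt -m 0x07 --disable-cpumask-locks &   # cores 0,1,2, unclaimed
  build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                           # cores 2,3,4, unclaimed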
00:08:53.895 [2024-07-26 14:03:10.674178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.153 [2024-07-26 14:03:10.801125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.153 [2024-07-26 14:03:10.801179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.153 [2024-07-26 14:03:10.801182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2410951 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2410951 /var/tmp/spdk2.sock 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2410951 ']' 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:54.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.411 14:03:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.411 [2024-07-26 14:03:11.131511] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:54.411 [2024-07-26 14:03:11.131602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410951 ] 00:08:54.411 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.411 [2024-07-26 14:03:11.225592] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:54.411 [2024-07-26 14:03:11.225630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.670 [2024-07-26 14:03:11.475667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.670 [2024-07-26 14:03:11.479489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.670 [2024-07-26 14:03:11.479492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.237 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.237 [2024-07-26 14:03:12.093533] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2410927 has claimed it. 
00:08:55.237 request: 00:08:55.237 { 00:08:55.237 "method": "framework_enable_cpumask_locks", 00:08:55.237 "req_id": 1 00:08:55.237 } 00:08:55.238 Got JSON-RPC error response 00:08:55.238 response: 00:08:55.238 { 00:08:55.238 "code": -32603, 00:08:55.238 "message": "Failed to claim CPU core: 2" 00:08:55.238 } 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2410927 /var/tmp/spdk.sock 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2410927 ']' 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.238 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2410951 /var/tmp/spdk2.sock 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2410951 ']' 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:55.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
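A note on the failure above: this is the expected negative case of the overlapped-coremask test. The first spdk_tgt (pid 2410927) already holds flock-based lock files for its cores, and the second target was deliberately started on an overlapping mask with --disable-cpumask-locks, so asking it to claim locks over /var/tmp/spdk2.sock reports -32603. A minimal sketch of the same probe, reusing only paths and pids that appear in this log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # The second target shares core 2 with pid 2410927, so the claim must fail:
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'expected: core 2 already claimed by 2410927'
    # One lock file exists per successfully claimed core; the test verifies
    # the surviving set below via check_remaining_locks:
    ls /var/tmp/spdk_cpu_lock_*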
00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.803 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:56.369 00:08:56.369 real 0m2.476s 00:08:56.369 user 0m1.451s 00:08:56.369 sys 0m0.238s 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.369 14:03:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.369 ************************************ 00:08:56.369 END TEST locking_overlapped_coremask_via_rpc 00:08:56.369 ************************************ 00:08:56.369 14:03:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:56.369 14:03:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2410927 ]] 00:08:56.369 14:03:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2410927 00:08:56.369 14:03:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2410927 ']' 00:08:56.369 14:03:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2410927 00:08:56.369 14:03:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:56.369 14:03:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.369 14:03:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2410927 00:08:56.369 14:03:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.369 14:03:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.369 14:03:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2410927' 00:08:56.369 killing process with pid 2410927 00:08:56.369 14:03:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2410927 00:08:56.369 14:03:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2410927 00:08:56.935 14:03:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2410951 ]] 00:08:56.935 14:03:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2410951 00:08:56.935 14:03:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2410951 ']' 00:08:56.935 14:03:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2410951 00:08:56.935 14:03:13 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:56.936 14:03:13 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:08:56.936 14:03:13 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2410951 00:08:56.936 14:03:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:56.936 14:03:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:56.936 14:03:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2410951' 00:08:56.936 killing process with pid 2410951 00:08:56.936 14:03:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2410951 00:08:56.936 14:03:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2410951 00:08:57.504 14:03:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.504 14:03:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:57.504 14:03:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2410927 ]] 00:08:57.504 14:03:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2410927 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2410927 ']' 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2410927 00:08:57.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2410927) - No such process 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2410927 is not found' 00:08:57.504 Process with pid 2410927 is not found 00:08:57.504 14:03:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2410951 ]] 00:08:57.504 14:03:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2410951 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2410951 ']' 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2410951 00:08:57.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2410951) - No such process 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2410951 is not found' 00:08:57.504 Process with pid 2410951 is not found 00:08:57.504 14:03:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.504 00:08:57.504 real 0m20.153s 00:08:57.504 user 0m34.900s 00:08:57.504 sys 0m6.997s 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.504 14:03:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.504 ************************************ 00:08:57.504 END TEST cpu_locks 00:08:57.504 ************************************ 00:08:57.504 00:08:57.504 real 0m52.350s 00:08:57.504 user 1m42.079s 00:08:57.504 sys 0m12.629s 00:08:57.504 14:03:14 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.504 14:03:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.504 ************************************ 00:08:57.504 END TEST event 00:08:57.504 ************************************ 00:08:57.504 14:03:14 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:57.504 14:03:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.504 14:03:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.504 14:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.504 ************************************ 00:08:57.504 START TEST thread 00:08:57.504 ************************************ 00:08:57.504 14:03:14 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:57.504 * Looking for test storage... 00:08:57.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:57.504 14:03:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:57.504 14:03:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:57.504 14:03:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.504 14:03:14 thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.763 ************************************ 00:08:57.763 START TEST thread_poller_perf 00:08:57.763 ************************************ 00:08:57.763 14:03:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:57.763 [2024-07-26 14:03:14.422587] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:08:57.763 [2024-07-26 14:03:14.422733] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411441 ] 00:08:57.763 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.763 [2024-07-26 14:03:14.516346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.763 [2024-07-26 14:03:14.642034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.763 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:59.136 ====================================== 00:08:59.136 busy:2712629872 (cyc) 00:08:59.136 total_run_count: 292000 00:08:59.136 tsc_hz: 2700000000 (cyc) 00:08:59.136 ====================================== 00:08:59.136 poller_cost: 9289 (cyc), 3440 (nsec) 00:08:59.136 00:08:59.136 real 0m1.380s 00:08:59.136 user 0m1.261s 00:08:59.136 sys 0m0.112s 00:08:59.136 14:03:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.136 14:03:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.136 ************************************ 00:08:59.136 END TEST thread_poller_perf 00:08:59.136 ************************************ 00:08:59.136 14:03:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:59.136 14:03:15 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:59.136 14:03:15 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.136 14:03:15 thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.136 ************************************ 00:08:59.136 START TEST thread_poller_perf 00:08:59.136 ************************************ 00:08:59.136 14:03:15 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:59.136 [2024-07-26 14:03:15.879392] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
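For readers decoding the poller_perf summary above: poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. Per the banner, the first run drives 1000 pollers with a 1 µs period (-l 1) and lands at 9289 cycles per invocation; the second run below uses -l 0 (busy pollers) and comes in far cheaper. A sketch of the arithmetic, using the first run's numbers:

    busy=2712629872; runs=292000; tsc_hz=2700000000
    cyc=$(( busy / runs ))                    # 2712629872 / 292000 = 9289 cycles
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 9289 cyc / 2.7 GHz ~= 3440 ns
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"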
00:08:59.136 [2024-07-26 14:03:15.879562] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411598 ] 00:08:59.136 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.136 [2024-07-26 14:03:15.984958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.394 [2024-07-26 14:03:16.111201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.394 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:00.768 ====================================== 00:09:00.768 busy:2702960944 (cyc) 00:09:00.768 total_run_count: 3858000 00:09:00.768 tsc_hz: 2700000000 (cyc) 00:09:00.768 ====================================== 00:09:00.768 poller_cost: 700 (cyc), 259 (nsec) 00:09:00.768 00:09:00.768 real 0m1.388s 00:09:00.768 user 0m1.268s 00:09:00.768 sys 0m0.113s 00:09:00.768 14:03:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.768 14:03:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:00.768 ************************************ 00:09:00.768 END TEST thread_poller_perf 00:09:00.768 ************************************ 00:09:00.768 14:03:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:00.768 00:09:00.768 real 0m2.982s 00:09:00.768 user 0m2.632s 00:09:00.768 sys 0m0.350s 00:09:00.768 14:03:17 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.768 14:03:17 thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.768 ************************************ 00:09:00.768 END TEST thread 00:09:00.768 ************************************ 00:09:00.768 14:03:17 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:09:00.768 14:03:17 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:00.768 14:03:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.768 14:03:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.768 14:03:17 -- common/autotest_common.sh@10 -- # set +x 00:09:00.768 ************************************ 00:09:00.768 START TEST app_cmdline 00:09:00.768 ************************************ 00:09:00.768 14:03:17 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:00.768 * Looking for test storage... 00:09:00.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:00.768 14:03:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:00.768 14:03:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2411911 00:09:00.768 14:03:17 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:00.768 14:03:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2411911 00:09:00.768 14:03:17 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2411911 ']' 00:09:00.768 14:03:17 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.768 14:03:17 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.769 14:03:17 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:00.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.769 14:03:17 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.769 14:03:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:00.769 [2024-07-26 14:03:17.465858] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:09:00.769 [2024-07-26 14:03:17.465954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411911 ] 00:09:00.769 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.769 [2024-07-26 14:03:17.533795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.027 [2024-07-26 14:03:17.655610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.285 14:03:17 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.285 14:03:17 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:01.285 14:03:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:01.544 { 00:09:01.544 "version": "SPDK v24.09-pre git sha1 dcc54343a", 00:09:01.544 "fields": { 00:09:01.544 "major": 24, 00:09:01.544 "minor": 9, 00:09:01.544 "patch": 0, 00:09:01.544 "suffix": "-pre", 00:09:01.544 "commit": "dcc54343a" 00:09:01.544 } 00:09:01.544 } 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:01.544 14:03:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
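The cmdline test starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable: spdk_get_version returns the JSON shown above, and the sorted rpc_get_methods output is compared against that two-element list. Any other method should be rejected, which the env_dpdk_get_mem_stats call below demonstrates. A sketch of the three probes, with paths taken from this log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC spdk_get_version                        # allowed: returns the version JSON
    $RPC rpc_get_methods | jq -r '.[]' | sort    # exactly the two allowed methods
    $RPC env_dpdk_get_mem_stats                  # not allowlisted: -32601 Method not found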
00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:01.544 14:03:18 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.802 request: 00:09:01.802 { 00:09:01.802 "method": "env_dpdk_get_mem_stats", 00:09:01.802 "req_id": 1 00:09:01.802 } 00:09:01.802 Got JSON-RPC error response 00:09:01.802 response: 00:09:01.802 { 00:09:01.802 "code": -32601, 00:09:01.802 "message": "Method not found" 00:09:01.802 } 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.060 14:03:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2411911 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2411911 ']' 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2411911 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411911 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2411911' 00:09:02.060 killing process with pid 2411911 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@969 -- # kill 2411911 00:09:02.060 14:03:18 app_cmdline -- common/autotest_common.sh@974 -- # wait 2411911 00:09:02.627 00:09:02.627 real 0m1.886s 00:09:02.627 user 0m2.403s 00:09:02.627 sys 0m0.543s 00:09:02.627 14:03:19 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.627 14:03:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:02.627 ************************************ 00:09:02.627 END TEST app_cmdline 00:09:02.627 ************************************ 00:09:02.627 14:03:19 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:02.627 14:03:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.627 14:03:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.627 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:02.627 ************************************ 00:09:02.627 START TEST version 00:09:02.627 ************************************ 00:09:02.627 14:03:19 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:02.627 * Looking for test storage... 
00:09:02.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:02.627 14:03:19 version -- app/version.sh@17 -- # get_header_version major 00:09:02.627 14:03:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # cut -f2 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.627 14:03:19 version -- app/version.sh@17 -- # major=24 00:09:02.627 14:03:19 version -- app/version.sh@18 -- # get_header_version minor 00:09:02.627 14:03:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # cut -f2 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.627 14:03:19 version -- app/version.sh@18 -- # minor=9 00:09:02.627 14:03:19 version -- app/version.sh@19 -- # get_header_version patch 00:09:02.627 14:03:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # cut -f2 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.627 14:03:19 version -- app/version.sh@19 -- # patch=0 00:09:02.627 14:03:19 version -- app/version.sh@20 -- # get_header_version suffix 00:09:02.627 14:03:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # cut -f2 00:09:02.627 14:03:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.627 14:03:19 version -- app/version.sh@20 -- # suffix=-pre 00:09:02.627 14:03:19 version -- app/version.sh@22 -- # version=24.9 00:09:02.628 14:03:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:02.628 14:03:19 version -- app/version.sh@28 -- # version=24.9rc0 00:09:02.628 14:03:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:02.628 14:03:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:02.628 14:03:19 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:02.628 14:03:19 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:02.628 00:09:02.628 real 0m0.137s 00:09:02.628 user 0m0.075s 00:09:02.628 sys 0m0.087s 00:09:02.628 14:03:19 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.628 14:03:19 version -- common/autotest_common.sh@10 -- # set +x 00:09:02.628 ************************************ 00:09:02.628 END TEST version 00:09:02.628 ************************************ 00:09:02.628 14:03:19 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:09:02.628 14:03:19 -- spdk/autotest.sh@202 -- # uname -s 00:09:02.628 14:03:19 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:09:02.628 14:03:19 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:09:02.628 14:03:19 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:09:02.628 14:03:19 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
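version.sh above scrapes include/spdk/version.h with grep/cut/tr and then checks the assembled string against the Python package's spdk.__version__. A condensed sketch of that assembly, assuming the same header layout; the helper function is illustrative, and the -pre-to-rc0 mapping is inferred from the 24.9rc0 value this run produces:

    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    get() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(get MAJOR) minor=$(get MINOR) patch=$(get PATCH) suffix=$(get SUFFIX)
    ver="${major}.${minor}"
    (( patch != 0 )) && ver="${ver}.${patch}"    # patch is 0 here, so skipped
    [[ $suffix == -pre ]] && ver="${ver}rc0"     # assumed: -pre maps to rc0, per this run
    echo "$ver"                                  # -> 24.9rc0, matching spdk.__version__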
00:09:02.628 14:03:19 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:09:02.628 14:03:19 -- spdk/autotest.sh@264 -- # timing_exit lib 00:09:02.628 14:03:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.628 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:02.628 14:03:19 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:09:02.628 14:03:19 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:09:02.628 14:03:19 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:09:02.628 14:03:19 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:09:02.628 14:03:19 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:09:02.628 14:03:19 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:09:02.628 14:03:19 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.628 14:03:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.628 14:03:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.628 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:02.888 ************************************ 00:09:02.888 START TEST nvmf_tcp 00:09:02.888 ************************************ 00:09:02.888 14:03:19 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.888 * Looking for test storage... 00:09:02.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:02.888 14:03:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:02.888 14:03:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:02.888 14:03:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:02.888 14:03:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.888 14:03:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.888 14:03:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.888 ************************************ 00:09:02.888 START TEST nvmf_target_core 00:09:02.888 ************************************ 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:02.888 * Looking for test storage... 00:09:02.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.888 ************************************ 00:09:02.888 START TEST nvmf_abort 00:09:02.888 ************************************ 00:09:02.888 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:03.149 * Looking for test storage... 
00:09:03.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.149 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
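Before network setup, nvmf/common.sh (sourced twice above, once per nvmf test) derives the initiator identity: NVME_HOSTNQN comes from nvme gen-hostnqn, and NVME_HOSTID is evidently the bare UUID tail of that NQN, matching the cd6acfbe-... values printed above. A sketch of that derivation; only the resulting values are visible in the log, so the stripping step is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: keep text after the last ':' -> bare uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")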
00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.150 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
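gather_supported_nvmf_pci_devs above builds ID tables for the NICs the harness knows about (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox IDs), keeps only the e810 entries since SPDK_TEST_NVMF_NICS=e810, then resolves each PCI address to its kernel net devices through sysfs, as the next lines show for 0000:84:00.0 and 0000:84:00.1. The lookup amounts to:

    pci=0000:84:00.0                           # first E810 port found in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep the interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 here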
00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:05.706 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:05.706 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.706 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.707 14:03:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:05.707 Found net devices under 0000:84:00.0: cvl_0_0 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:05.707 Found net devices under 0000:84:00.1: cvl_0_1 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.707 
14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:09:05.707 00:09:05.707 --- 10.0.0.2 ping statistics --- 00:09:05.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.707 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:09:05.707 00:09:05.707 --- 10.0.0.1 ping statistics --- 00:09:05.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.707 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=2413986 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2413986 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2413986 ']' 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.707 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.707 [2024-07-26 14:03:22.563264] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:09:05.707 [2024-07-26 14:03:22.563370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.966 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.966 [2024-07-26 14:03:22.653060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.966 [2024-07-26 14:03:22.795089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.966 [2024-07-26 14:03:22.795162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.966 [2024-07-26 14:03:22.795181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.966 [2024-07-26 14:03:22.795198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.966 [2024-07-26 14:03:22.795211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
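The target launch traced above (nvmf/common.sh@480-482) boils down to a short shell pattern: start nvmf_tgt inside the freshly built namespace, record its pid, then poll the JSON-RPC socket until the app answers. A minimal sketch of that pattern, assuming the standard SPDK tree layout and the default socket path (the real waitforlisten adds retries and a timeout on top of this):

  # -i 0 selects shm id 0, -e 0xFFFF enables all tracepoint groups, -m 0xE runs reactors on cores 1-3
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the target responds; spdk_get_version is a cheap no-op query
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
  done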
00:09:05.966 [2024-07-26 14:03:22.795316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.966 [2024-07-26 14:03:22.795380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.966 [2024-07-26 14:03:22.795385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 [2024-07-26 14:03:22.967503] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.226 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 Malloc0 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 Delay0 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 [2024-07-26 14:03:23.046851] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.226 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:06.226 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.493 [2024-07-26 14:03:23.131978] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:08.393 Initializing NVMe Controllers 00:09:08.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.393 controller IO queue size 128 less than required 00:09:08.393 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:08.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:08.393 Initialization complete. Launching workers. 
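The abort example connects at queue depth 128, which the controller's reported I/O queue size cannot fully cover (hence the warning above), and then fires abort commands at its own outstanding reads. Roughly, in the counters printed next, "failed" I/O are reads that completed with an abort status, while "success"/"unsuccess" count abort commands that did or did not catch their target in time. If the target side ever needs checking at this point, the assembled subsystem can be dumped over RPC; a one-liner sketch, assuming the default socket path:

  # show the subsystems, namespaces and listeners the target currently exports
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems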
00:09:08.393 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29638 00:09:08.393 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29699, failed to submit 62 00:09:08.393 success 29642, unsuccess 57, failed 0 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.393 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.393 rmmod nvme_tcp 00:09:08.393 rmmod nvme_fabrics 00:09:08.393 rmmod nvme_keyring 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2413986 ']' 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2413986 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2413986 ']' 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2413986 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2413986 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2413986' 00:09:08.652 killing process with pid 2413986 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2413986 00:09:08.652 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2413986 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.911 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.447 00:09:11.447 real 0m8.003s 00:09:11.447 user 0m10.908s 00:09:11.447 sys 0m3.091s 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:11.447 ************************************ 00:09:11.447 END TEST nvmf_abort 00:09:11.447 ************************************ 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.447 ************************************ 00:09:11.447 START TEST nvmf_ns_hotplug_stress 00:09:11.447 ************************************ 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:11.447 * Looking for test storage... 
00:09:11.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same golangci/protoc/go triplet repeated six more times ...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [... paths/export.sh@3-@6 prepend the go and protoc dirs once more, export PATH, and echo the same heavily duplicated value; three more full PATH dumps omitted ...] 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.447 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.448 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.978 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.978 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:13.978 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:13.978 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:13.978 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:13.979 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.979 14:03:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:13.979 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:13.979 Found net devices under 0000:84:00.0: cvl_0_0 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:13.979 Found net devices under 0000:84:00.1: cvl_0_1 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:13.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:09:13.979 00:09:13.979 --- 10.0.0.2 ping statistics --- 00:09:13.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.979 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:09:13.979 00:09:13.979 --- 10.0.0.1 ping statistics --- 00:09:13.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.979 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.979 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2416352 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2416352 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2416352 ']' 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
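With connectivity verified and the second nvmf_tgt about to come up on /var/tmp/spdk.sock, everything ns_hotplug_stress.sh does next is driven through rpc.py. Condensed from the trace that follows, with the long workspace paths shortened to rpc.py, the setup sequence is:

  # transport, subsystem (up to 10 namespaces), data and discovery listeners
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # a 32 MB malloc disk wrapped in a delay bdev (all four latency knobs at 1,000,000 us, i.e. ~1 s),
  # plus a 1000 MB null bdev whose size the stress loop will keep nudging upward
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1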
00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.980 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.980 [2024-07-26 14:03:30.807933] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:09:13.980 [2024-07-26 14:03:30.808040] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.980 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.239 [2024-07-26 14:03:30.899384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.239 [2024-07-26 14:03:31.038362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.239 [2024-07-26 14:03:31.038436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.239 [2024-07-26 14:03:31.038469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.239 [2024-07-26 14:03:31.038499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.239 [2024-07-26 14:03:31.038511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.239 [2024-07-26 14:03:31.038616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.239 [2024-07-26 14:03:31.038656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.239 [2024-07-26 14:03:31.038659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:14.532 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:14.791 [2024-07-26 14:03:31.487659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.791 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:15.357 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.923 
[2024-07-26 14:03:32.608136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.923 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.489 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:17.056 Malloc0 00:09:17.056 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:17.623 Delay0 00:09:17.623 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.881 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:18.447 NULL1 00:09:18.447 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:18.705 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2416914 00:09:18.705 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:18.705 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:18.705 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.968 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.916 Read completed with error (sct=0, sc=11) 00:09:19.916 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.433 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:20.433 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:20.691 true 00:09:20.691 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:20.691 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.258 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.775 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:21.775 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:22.033 true 00:09:22.033 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:22.033 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.599 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.165 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:23.165 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:23.423 true 00:09:23.423 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:23.423 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.797 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.361 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1004 00:09:25.361 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:25.617 true 00:09:25.617 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:25.617 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.986 14:03:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.551 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:27.551 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:27.808 true 00:09:27.808 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:27.808 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.179 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.437 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:29.437 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:30.001 true 00:09:30.001 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:30.001 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.565 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.822 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:30.822 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:31.079 true 00:09:31.079 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:31.079 14:03:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.011 14:03:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.267 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:32.267 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:32.525 true 00:09:32.525 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:32.525 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.090 14:03:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.362 [2024-07-26 14:03:50.104585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.362 [2024-07-26 14:03:50.104727] 
00:09:33.362 [2024-07-26 14:03:50.104585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:33.364 last message repeated through [2024-07-26 14:03:50.129766]
00:09:33.364 14:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:09:33.364 14:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:09:33.364 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:33.364 [2024-07-26 14:03:50.130380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:33.365 last message repeated through [2024-07-26 14:03:50.142858]
[2024-07-26 14:03:50.142934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.143997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.144054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.144117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.144178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.144254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.144319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.144388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.365 [2024-07-26 14:03:50.144467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.144982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.145977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146544] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.146961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.147980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.148047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.148118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.148180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.148247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 
[2024-07-26 14:03:50.148319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.148959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.149991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.150961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.151965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152297] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.152948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.153989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.154057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.154124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 
[2024-07-26 14:03:50.154194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.154265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.154331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.154398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.154480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.155992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.156981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.157992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.158059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.366 [2024-07-26 14:03:50.158123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158401] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.158957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.159970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 
[2024-07-26 14:03:50.160296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.160950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.161941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.162964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163687] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.163889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.164813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.164881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.164941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.165938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 
[2024-07-26 14:03:50.166279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.166966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.167980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.367 [2024-07-26 14:03:50.168045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:33.371 [... same *ERROR* line repeated verbatim, timestamps 14:03:50.168116 through 14:03:50.204503; duplicates omitted ...]
00:09:33.371 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:33.371 [... same *ERROR* line repeated verbatim, timestamps 14:03:50.205420 through 14:03:50.208218; duplicates omitted ...]
00:09:33.371 [2024-07-26 14:03:50.208279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371
[2024-07-26 14:03:50.208340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.208949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.209964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.210946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.211942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212376] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.371 [2024-07-26 14:03:50.212594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.212661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.212730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.212803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.212870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.212938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.213971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 
[2024-07-26 14:03:50.214169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.214945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.215949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.216985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217817] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.217945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.218997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 
[2024-07-26 14:03:50.219547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.219948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.220023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.220093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.220157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.221986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.222046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.222107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.222170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.372 [2024-07-26 14:03:50.222248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.222936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223830] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.223958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.224979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 
[2024-07-26 14:03:50.225647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.225963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.226029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.226101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.226166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.226232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.226300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.226364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.226444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.227956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.228989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229565] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.229936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.230958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 
[2024-07-26 14:03:50.231449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.231973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.232037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.232109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.232172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.232238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.373 [2024-07-26 14:03:50.232302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.232944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.233008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.233063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.374 [2024-07-26 14:03:50.233129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:09:33.374 [2024-07-26 14:03:50.233192 through 14:03:50.273662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error record repeated several hundred times over console timestamps 00:09:33.374-00:09:33.662; intermediate entries elided) 
00:09:33.662 [2024-07-26 14:03:50.273728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.273795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.273863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.273931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.273996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.274949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275398] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.275969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.276951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.277016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.277080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.277153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.277223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 
[2024-07-26 14:03:50.277293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.277364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.277439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.662 [2024-07-26 14:03:50.277510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.277575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.277640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.277706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.277772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.277838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.277908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.277973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.278997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.279638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:33.663 [2024-07-26 14:03:50.280529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.280601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.280666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.280733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.280799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.280865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.280939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:33.663 [2024-07-26 14:03:50.281475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.281946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.282983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.283950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.284023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.284090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.284157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.284221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.284294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.284367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.663 [2024-07-26 14:03:50.284437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.284502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.284573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.284640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.284706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.284934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285002] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.285963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.286973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 
[2024-07-26 14:03:50.287169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.287945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.288999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.289942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290497] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.290973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.291037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.291101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.291163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.291226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.291286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.664 [2024-07-26 14:03:50.291344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.291948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 
[2024-07-26 14:03:50.292335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.292979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.293803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.294689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.294764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.294838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.294900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.294966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.295953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296479] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.296961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.297978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.298037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.298100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 [2024-07-26 14:03:50.298158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.665 
[2024-07-26 14:03:50.298218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:33.672 [... identical *ERROR* line repeated verbatim (message timestamps 2024-07-26 14:03:50.298286 through 14:03:50.338616); duplicate log lines omitted ...]
[2024-07-26 14:03:50.338677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.338740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.338804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.338863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.338930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.338996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.339999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.340968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.341975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342216] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.342980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 
[2024-07-26 14:03:50.343922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.343977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.672 [2024-07-26 14:03:50.344952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.345831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.346771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.346856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.346923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.346992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.347949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.348968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 
[2024-07-26 14:03:50.349851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.349985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.350930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.351985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.352055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.352115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.352179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.673 [2024-07-26 14:03:50.352246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.352724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.352801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.352858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.352919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.352985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353886] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.353953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.354981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 
[2024-07-26 14:03:50.355637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.355938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.356958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:33.674 [2024-07-26 14:03:50.357330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 
14:03:50.357467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.357955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.674 [2024-07-26 14:03:50.358972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:33.675 [2024-07-26 14:03:50.359172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.359961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.360027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.360095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.360168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.361960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.362971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.363036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.363103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.363169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.363233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.363295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675 [2024-07-26 14:03:50.363360] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675
[2024-07-26 14:03:50.363423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.675
[... last message repeated several hundred times, 14:03:50.363 through 14:03:50.403 ...]
[2024-07-26 14:03:50.403441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681
[2024-07-26 14:03:50.403509] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.403585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.403654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.403719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.403786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.403852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.403923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.403987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.404536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 
[2024-07-26 14:03:50.405698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.405965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.406979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.407930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.408945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409010] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.681 [2024-07-26 14:03:50.409700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.409768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.409845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.409911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.409978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 
[2024-07-26 14:03:50.410926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.410989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.411997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.412059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.412125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.412192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.412257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.412313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.412374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.413998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.414950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415214] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.415977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 
[2024-07-26 14:03:50.416895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.416958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.417951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.682 [2024-07-26 14:03:50.418017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.418843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.419975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420868] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.420996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.421964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 
[2024-07-26 14:03:50.422651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.422978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.423961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.424930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.683 [2024-07-26 14:03:50.425917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.425988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426182] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.426736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.427611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.427685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.427766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.427837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.427902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.427966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 [2024-07-26 14:03:50.428704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.684 
00:09:33.684 [2024-07-26 14:03:50.428762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:33.684 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:33.684 [... same *ERROR* line repeated for every read from 14:03:50.428832 through 14:03:50.468751 (elapsed 00:09:33.684-00:09:33.690) ...]
size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.468817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.468885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.468954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.469952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470716] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.470961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.471996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.472063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.472126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.472191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.472257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.472321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 
[2024-07-26 14:03:50.473133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.473955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.474019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.474090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.474146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.474212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.474276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.474343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.690 [2024-07-26 14:03:50.474405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.474987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.475945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476559] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.476959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.477991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 
[2024-07-26 14:03:50.478449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.478965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.479807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.480966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.481027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.481093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.481156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.481221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.481287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.691 [2024-07-26 14:03:50.481349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.481952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482526] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.482996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.483952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 
[2024-07-26 14:03:50.484291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.484964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.485949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.486960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.692 [2024-07-26 14:03:50.487872] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.487942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.488003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.488807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.488878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.488947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.489990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 
[2024-07-26 14:03:50.490323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.490971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.491997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.492960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.693 [2024-07-26 14:03:50.493908] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:33.693 [2024-07-26 14:03:50.493972 .. 14:03:50.534471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical error line repeated for every read issued in this interval; duplicates collapsed)
00:09:33.695 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:33.980 true
00:09:33.982
[2024-07-26 14:03:50.534534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.534603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.534665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.534736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.534799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.534865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.534924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.534979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.535944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.536956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537892] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.537956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.538972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.539033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.539100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.539162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.539234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.539295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.539363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.539426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 
[2024-07-26 14:03:50.540249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.982 [2024-07-26 14:03:50.540991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.983 [2024-07-26 14:03:50.541818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
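The message flooding this stretch of the log is SPDK's NVMe-oF target rejecting each READ because the data the command requests (NLB * block size = 1 * 512 bytes) exceeds the 1-byte buffer its SGL describes, so the command is failed rather than dispatched to the bdev. Below is a minimal standalone sketch of that length check, reconstructed only from the error text itself; the names and the main() driver are illustrative assumptions, not SPDK's actual ctrlr_bdev.c source.

/* read_len_check.c - illustrative sketch of the validation behind the
 * repeated *ERROR* line above (assumption: NOT SPDK's real source). */
#include <inttypes.h>
#include <stdio.h>

/* Reject a READ whose requested bytes (NLB blocks * block size) exceed the
 * buffer length described by the command's SGL. Returns 1 if OK, 0 if not. */
static int read_len_ok(uint64_t nlb, uint32_t block_size, uint32_t sgl_len)
{
    if (nlb * block_size > sgl_len) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_len);
        return 0; /* the target would complete the command with an error status */
    }
    return 1;
}

int main(void)
{
    /* the exact values from this log: 1 block of 512 bytes vs. a 1-byte SGL */
    return read_len_ok(1, 512, 1) ? 0 : 1;
}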
[... same *ERROR* line continues repeating around the two test-driver commands below; duplicates omitted ...]
00:09:33.983 14:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914
00:09:33.983 14:03:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
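Two lines worth pulling out of the error flood: "kill -0 2416914" sends no signal and only verifies that the target process (PID 2416914) is still alive, and the rpc.py call "nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1" hot-removes namespace 1 from the subsystem while I/O is still in flight; that removal is exactly the hotplug event target/ns_hotplug_stress.sh is built to exercise.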
[... identical ctrlr_bdev.c:309 *ERROR* line repeated continuously from 14:03:50.543372 through 14:03:50.565863; duplicates omitted ...]
00:09:33.986 [2024-07-26 14:03:50.565926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.986
[2024-07-26 14:03:50.565988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.986 [2024-07-26 14:03:50.566067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.566995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.567950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.568772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.569960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570223] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.570938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.571961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 
[2024-07-26 14:03:50.572028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.572950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.573022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.573092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.573157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.573225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.987 [2024-07-26 14:03:50.573293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.573859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.574958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575695] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.575954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.576964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.577027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.577093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.577723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.577792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.577859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.577924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 
[2024-07-26 14:03:50.577992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.578983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.579928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.580000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.580068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.580137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.580212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.580278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.988 [2024-07-26 14:03:50.580347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.580982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581442] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.581952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:33.989 [2024-07-26 14:03:50.582177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.582947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.583012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.583090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.583164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.583229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.989 [2024-07-26 14:03:50.583294] ctrlr_bdev.c: 
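The repeated ctrlr_bdev.c:309 error is the nvmf read-path length validation firing: a read of NLB logical blocks may not transfer more data than the host-supplied SGL can hold, and here 1 block * 512 bytes exceeds the 1-byte SGL, so each read completes with sct=0, sc=15 (Generic Command Status, Data SGL Length Invalid, 0x0f). A minimal standalone sketch of that check follows; the function and parameter names are illustrative, not SPDK's actual API.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the length check behind the ctrlr_bdev.c:309 message.
     * In SPDK the real check lives in nvmf_bdev_ctrlr_read_cmd(); the
     * names and return values here are illustrative only. */
    static int
    read_cmd_check_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
    {
        if (nlb * block_size > sgl_length) {
            fprintf(stderr,
                    "Read NLB %llu * block size %u > SGL length %u\n",
                    (unsigned long long)nlb, block_size, sgl_length);
            return -1; /* request completes with sct=0, sc=0x0f */
        }
        return 0;
    }

    int
    main(void)
    {
        /* The case from the log: one 512-byte block against a 1-byte SGL.
         * Exit 0 when the check rejects it, as expected. */
        return read_cmd_check_sgl(1, 512, 1) == -1 ? 0 : 1;
    }

The "Message suppressed 999 times" notice is the log rate-limiter coalescing the matching completions, which is why one summary line stands in for roughly a thousand identical errors.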
00:09:33.989 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated again, timestamps 2024-07-26 14:03:50.582177 through 14:03:50.599059 ...]
00:09:33.991 [2024-07-26 14:03:50.599122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.599977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.600051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.600118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.600185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.600250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.600317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.600389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.991 [2024-07-26 14:03:50.600476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.600544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.600609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.600674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.600748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.600819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.600884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.600952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601018] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.601989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.602507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 
[2024-07-26 14:03:50.603570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.603995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.604964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.605977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.606972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607040] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.992 [2024-07-26 14:03:50.607679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.607746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.607962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 
[2024-07-26 14:03:50.608938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.608999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.609949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.610017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.610085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.610156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.610868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.610939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.611976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.612944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613081] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.613941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 [2024-07-26 14:03:50.614800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.993 
[2024-07-26 14:03:50.614865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.614929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.614995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.615961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.616994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.617979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.618047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.618113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.618179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.618257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.618326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.618396] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.619984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 
[2024-07-26 14:03:50.620875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.994 [2024-07-26 14:03:50.620937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.621942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.622970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.623990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.624059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.624120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.624182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.624251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.624318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.624380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 [2024-07-26 14:03:50.624454] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.995 
[... identical nvmf_bdev_ctrlr_read_cmd SGL-length errors repeated, timestamps 14:03:50.624518 through 14:03:50.655282; duplicate lines elided ...] 
Message suppressed 999 times: [2024-07-26 14:03:50.655352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.000 
Read completed with error (sct=0, sc=15) 00:09:34.000 
[... identical nvmf_bdev_ctrlr_read_cmd SGL-length errors continue, timestamps 14:03:50.655419 through 14:03:50.665251; duplicate lines elided ...] 
size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.665982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.666947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667146] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.667960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.001 [2024-07-26 14:03:50.668769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.668837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 
[2024-07-26 14:03:50.668907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.668977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.669980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.670945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.671009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.671073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.671991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.672971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673182] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.673957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 
[2024-07-26 14:03:50.674864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.674992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.002 [2024-07-26 14:03:50.675853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.675919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.675987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.676965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.677953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678806] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.678998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.679955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 
[2024-07-26 14:03:50.680579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.680982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.003 [2024-07-26 14:03:50.681495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.681563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.681632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.681703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.681766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.681828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.681889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.681952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.682945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.683963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684109] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.684995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.685052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.685109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.685172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.685240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.685304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.685377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 
[2024-07-26 14:03:50.686619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.686948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.687963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.004 [2024-07-26 14:03:50.688646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.688712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.688782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.688855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.688919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.688976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005 [2024-07-26 14:03:50.689988] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.005
[... "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" repeated verbatim several hundred times through elapsed time 00:09:34.011; duplicate log lines trimmed ...]
[2024-07-26 14:03:50.729790] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.729855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.729920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.729987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.730058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.730122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.730190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.730257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.730319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.730389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.730461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:34.011 [2024-07-26 14:03:50.731361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.731992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732318] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.732976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.733959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 
[2024-07-26 14:03:50.734025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.734971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.735965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.736027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.011 [2024-07-26 14:03:50.736089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.736796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.737942] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.738961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 
[2024-07-26 14:03:50.739711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.739968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.740940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.741973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.012 [2024-07-26 14:03:50.742914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.742978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743234] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.743970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.744683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.745550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.745620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.745693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 
[2024-07-26 14:03:50.745767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.745832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.745899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.745965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.746975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.747963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.013 [2024-07-26 14:03:50.748836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.748910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.748978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749178] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.749795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.750962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 
[2024-07-26 14:03:50.751099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.751990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.752989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.753956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.754971] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [2024-07-26 14:03:50.755041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.014 [... last message repeated several hundred times between 14:03:50.755 and 14:03:50.795; identical ctrlr_bdev.c:309 *ERROR* entries omitted ...] 00:09:34.020 [2024-07-26 14:03:50.795374] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.795963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.020 [2024-07-26 14:03:50.796863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.796930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.796998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 
[2024-07-26 14:03:50.797132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.797949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.798609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.799990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.800942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801207] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.801964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 
[2024-07-26 14:03:50.802861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.802983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.803778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 [2024-07-26 14:03:50.804298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.021 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:34.022 [2024-07-26 14:03:50.804369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.804978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 
14:03:50.805173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.805985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:34.022 [2024-07-26 14:03:50.806847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.806978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.807929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.808945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.809970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.810055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.810129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.022 [2024-07-26 14:03:50.810199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810879] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.810945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.811964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 
[2024-07-26 14:03:50.812603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.812998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.813066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.813133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.813198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.813261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:34.023 [2024-07-26 14:03:50.813562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:35.397 14:03:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.397 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:35.397 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:35.962 true 00:09:35.962 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:35.962 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.535 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:09:36.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.793 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:36.793 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:37.360 true 00:09:37.360 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:37.360 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.926 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:38.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:38.442 true 00:09:38.442 14:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:38.442 14:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.375 14:03:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:39.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.633 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:39.633 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:40.198 true 00:09:40.198 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:40.198 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.572 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.088 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:42.088 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:42.346 true 00:09:42.346 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:42.346 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.912 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:43.428 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:43.428 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:43.686 true 00:09:43.686 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:43.686 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.252 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.768 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:44.768 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:45.025 true 00:09:45.025 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:45.025 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.590 14:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.106 14:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:46.106 14:04:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:46.364 true 
00:09:46.364 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:46.364 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.929 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.494 14:04:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:47.494 14:04:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:47.751 true 00:09:47.751 14:04:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:47.751 14:04:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.124 Initializing NVMe Controllers 00:09:49.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:49.124 Controller IO queue size 128, less than required. 00:09:49.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:49.124 Controller IO queue size 128, less than required. 00:09:49.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:49.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:49.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:49.124 Initialization complete. Launching workers. 
00:09:49.124 ========================================================
00:09:49.124 Latency(us)
00:09:49.124 Device Information : IOPS MiB/s Average min max
00:09:49.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4113.07 2.01 24429.12 2046.64 1134393.05
00:09:49.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14209.83 6.94 9008.01 3380.15 411579.56
00:09:49.124 ========================================================
00:09:49.124 Total : 18322.90 8.95 12469.69 2046.64 1134393.05
00:09:49.124
00:09:49.124 14:04:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.690 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:49.948 true 00:09:50.207 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2416914 00:09:50.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2416914) - No such process 00:09:50.207 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2416914 00:09:50.207 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.774 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.032 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:51.598 null0 00:09:51.598 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:52.165 null1 00:09:52.165 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:52.423 null2 00:09:52.423 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.423 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.423 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:52.991 null3 00:09:52.991 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:52.991 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:52.991 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:53.557 null4 00:09:53.557 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:53.557 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:53.557 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:53.816 null5 00:09:53.816 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:53.816 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:53.816 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:54.075 null6 00:09:54.076 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:54.076 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:54.076 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:54.643 null7 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
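
A consistency check on the perf summary above: the Total row combines the two namespaces column-wise, and the numbers line up: 4113.07 + 14209.83 = 18322.90 IOPS and 2.01 + 6.94 = 8.95 MiB/s, while the Total average latency is the IOPS-weighted mean, (4113.07 * 24429.12 + 14209.83 * 9008.01) / 18322.90 ≈ 12469.69 us. The throughput-to-IOPS ratio (2.01 MiB/s at 4113.07 IOPS ≈ 512 bytes per request) indicates 512-byte I/Os. NSID 1's far worse average (24.4 ms vs 9.0 ms) and ~1.13 s maximum are consistent with namespace 1 being the one repeatedly detached and re-attached (the Delay0 bdev) while I/O is in flight.
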
00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
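
The @58-@66 trace markers come from the worker-launch section of ns_hotplug_stress.sh. Reconstructed from the trace alone (a sketch, not the verbatim script): eight 100 MB null bdevs are created first, then one background add_remove worker is started per namespace and its PID recorded, which is why pids+=($!) shows up in the parent's trace before each worker's own lines.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096    # @60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &              # @63: one hotplug worker per NSID
        pids+=($!)                                    # @64: remember the worker PID
    done
    wait "${pids[@]}"                                 # @66: wait 2421203 2421204 ... 2421216
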
00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.643 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
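
The interleaved @14-@18 lines are the workers' bodies. From the trace, each add_remove worker loops ten times, attaching its null bdev as a namespace and detaching it again; a sketch inferred from the trace markers, not the script verbatim:

    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16: ten cycles per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }
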
00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2421203 2421204 2421206 2421208 2421210 2421212 2421214 2421216 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.644 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:54.902 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.162 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:55.421 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.680 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:55.939 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.198 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.198 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.198 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.456 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
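
From this point the eight background workers' xtrace output interleaves nondeterministically: add_ns/remove_ns lines for NSIDs 1-8 recur in shuffled order, each worker advancing its own loop counter, until every worker has finished its ten cycles and the parent's wait returns.
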
00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.738 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.997 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.256 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:57.256 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:57.256 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:57.256 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:57.256 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.515 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:57.774 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.032 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:58.290 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.290 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.290 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:58.290 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.290 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.290 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:58.290 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:58.291 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.549 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.549 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:58.549 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.549 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.549 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.550 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.550 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.550 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:58.550 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.550 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.550 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.808 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:59.067 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.325 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:59.325 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.325 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.325 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.325 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.581 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.838 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.096 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:00.355 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.355 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.355 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.355 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.355 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.355 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.355 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.614 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:00.871 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.129 rmmod nvme_tcp 00:10:01.129 rmmod nvme_fabrics 00:10:01.129 rmmod nvme_keyring 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2416352 ']' 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2416352 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2416352 ']' 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2416352 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2416352 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2416352' 00:10:01.129 killing process with pid 2416352 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2416352 00:10:01.129 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2416352 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.388 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:03.949 00:10:03.949 real 0m52.473s 00:10:03.949 user 3m56.640s 00:10:03.949 sys 0m18.631s 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.949 ************************************ 00:10:03.949 END TEST nvmf_ns_hotplug_stress 00:10:03.949 ************************************ 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.949 ************************************ 00:10:03.949 START TEST nvmf_delete_subsystem 00:10:03.949 ************************************ 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:03.949 * Looking for test storage... 
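
The namespace churn that just finished (END TEST nvmf_ns_hotplug_stress above) was driven by lines 16-18 of ns_hotplug_stress.sh, visible in the trace as the (( ++i )) / (( i < 10 )) counter and the paired nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns RPCs. A sketch reconstructed from the trace rather than copied from the script: the function name and the parallel fan-out over eight namespaces are assumptions inferred from the interleaved output, and rpc.py stands in for the full scripts/rpc.py path.

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                                  # line 16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }
    # eight workers hammer nsids 1..8 concurrently, which produces the interleaving above
    for n in {1..8}; do add_remove "$n" "null$((n - 1))" & done
    wait
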
00:10:03.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=[paths/export.sh@2 through @6 elided: each re-source of the export script prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin once more, so the exported value repeats those three toolchain entries several times ahead of the stock /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin tail] 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.949 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.950 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
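
gather_supported_nvmf_pci_devs, whose trace follows, is essentially a lookup table from PCI vendor:device IDs to NIC families, narrowed by the SPDK_TEST_NVMF_NICS=e810 setting from autorun-spdk.conf. A condensed sketch of the logic as it appears in the trace; pci_bus_cache is assumed to be prefilled elsewhere in common.sh from lspci data, and only a sample of the Mellanox IDs is shown:

    e810=() x722=() mlx=() pci_devs=() net_devs=()
    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})        # E810 variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})        # the variant both ports below report
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})      # one of several ConnectX IDs in the full list
    pci_devs=("${e810[@]}")                          # SPDK_TEST_NVMF_NICS=e810 drops everything else
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # map PCI address -> kernel netdev names
        net_devs+=("${pci_net_devs[@]##*/}")               # strip the path, leaving cvl_0_0, cvl_0_1
    done
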
00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.478 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:06.479 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:06.479 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:06.479 Found net devices under 0000:84:00.0: cvl_0_0 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:06.479 Found net devices under 0000:84:00.1: cvl_0_1 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.479 14:04:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:10:06.479 00:10:06.479 --- 10.0.0.2 ping statistics --- 00:10:06.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.479 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:10:06.479 00:10:06.479 --- 10.0.0.1 ping statistics --- 00:10:06.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.479 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2424119 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2424119 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2424119 ']' 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.479 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.480 [2024-07-26 14:04:23.293948] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
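
Condensed from the nvmf_tcp_init and nvmfappstart commands traced above: one physical E810 port is pushed into a private network namespace to act as the target side, the other stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace. Paths are shortened; the flags and addresses match the trace, and waitforlisten is the autotest helper that polls the /var/tmp/spdk.sock RPC socket.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                                      # blocks until the RPC socket is up
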
00:10:06.480 [2024-07-26 14:04:23.294046] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.480 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.737 [2024-07-26 14:04:23.372409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.737 [2024-07-26 14:04:23.493302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.737 [2024-07-26 14:04:23.493371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.737 [2024-07-26 14:04:23.493388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.737 [2024-07-26 14:04:23.493402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.737 [2024-07-26 14:04:23.493413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.737 [2024-07-26 14:04:23.493506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.737 [2024-07-26 14:04:23.493513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.737 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.737 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:10:06.737 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.737 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.737 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.995 [2024-07-26 14:04:23.640405] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.995 [2024-07-26 14:04:23.656646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.995 NULL1 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.995 Delay0 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2424238 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:06.995 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:06.995 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.995 [2024-07-26 14:04:23.741426] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
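
With the listener up and Delay0 attached, the RPCs traced above assemble into the actual scenario, and the delete at line 32 lands in the trace immediately below. In outline, with the full rpc.py path shortened and the backgrounding and perf_pid bookkeeping paraphrased from lines 26-30 of delete_subsystem.sh:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512            # 1000 MB backing bdev, 512 B blocks
    # wrap it in a delay bdev with large artificial latencies (the -r/-t/-w/-n knobs),
    # so plenty of I/O is still queued when the subsystem is yanked
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # start I/O from the initiator namespace, then delete the subsystem out from under it
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
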
00:10:08.903 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.903 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.903 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.162 Read completed with error (sct=0, sc=8) 00:10:09.162 Write completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 [2024-07-26 14:04:25.884552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b943e0 is same with the state(5) to be set 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 
Write completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 starting I/O failed: -6 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 [2024-07-26 14:04:25.885876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f85c800d330 is same with the state(5) to be set 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Read completed with error (sct=0, sc=8) 00:10:09.163 Write 
completed with error (sct=0, sc=8)
00:10:09.163 Write completed with error (sct=0, sc=8)
00:10:09.163 Read completed with error (sct=0, sc=8)
[... the same "Read/Write completed with error (sct=0, sc=8)" completion repeats here verbatim, once per outstanding I/O on the deleted subsystem ...]
00:10:09.163 [2024-07-26 14:04:25.886438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b948f0 is same with the state(5) to be set
00:10:10.102 [2024-07-26 14:04:26.838775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95ac0 is same with the state(5) to be set
[... more Read/Write (sct=0, sc=8) completions ...]
00:10:10.102 [2024-07-26 14:04:26.888264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f85c800d660 is same with the state(5) to be set
[... more Read/Write (sct=0, sc=8) completions ...]
00:10:10.102 [2024-07-26 14:04:26.888544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b945c0 is same with the state(5) to be set
[... more Read/Write (sct=0, sc=8) completions ...]
00:10:10.102 [2024-07-26 14:04:26.889317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94c20 is same with the state(5) to be set
[... more Read/Write (sct=0, sc=8) completions ...]
00:10:10.103 [2024-07-26 14:04:26.889568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f85c800d000 is same with the state(5) to be set
00:10:10.103 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.103 Initializing NVMe Controllers
00:10:10.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:10.103 Controller IO queue size 128, less than required.
00:10:10.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
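For reading the storm of completions above: a status of (sct=0, sc=8) is NVMe Status Code Type 0h (Generic Command Status) with Status Code 08h, "Command Aborted due to SQ Deletion" -- the same status this log later renders as "ABORTED - SQ DELETION (00/08)" once the subsystem is torn down under live I/O. A throwaway bash helper, as a sketch (the function is invented here for illustration and is not part of the SPDK scripts):

# Sketch: decode the (sct, sc) pair spdk_nvme_perf prints above.
# Only the status actually seen in this log is special-cased.
decode_nvme_status() {
    local sct=$1 sc=$2
    if (( sct == 0 && sc == 8 )); then
        echo "ABORTED - SQ DELETION (00/08)"   # generic command status, SQ deleted under live I/O
    else
        printf 'sct=%u sc=%u (see the NVMe base spec status code tables)\n' "$sct" "$sc"
    fi
}
decode_nvme_status 0 8   # -> ABORTED - SQ DELETION (00/08)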
00:10:10.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:10.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:10.103 Initialization complete. Launching workers.
00:10:10.103 ========================================================
00:10:10.103                                                                  Latency(us)
00:10:10.103 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:10:10.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     163.80       0.08  912253.66    1906.68 2003081.12
00:10:10.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     159.83       0.08  935004.44     604.53 2003333.28
00:10:10.103 ========================================================
00:10:10.103 Total                                                           :     323.64       0.16  923489.48     604.53 2003333.28
00:10:10.103
00:10:10.103 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:10.103 [2024-07-26 14:04:26.890727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b95ac0 (9): Bad file descriptor
00:10:10.103 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2424238
00:10:10.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:10.103 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2424238
00:10:10.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2424238) - No such process
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2424238
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2424238
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2424238
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:10.673 14:04:27
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.673 [2024-07-26 14:04:27.411651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2424670 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670 00:10:10.673 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:10.673 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.673 [2024-07-26 14:04:27.477379] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
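What the trace above amounts to, in script form: the test recreates the subsystem it just deleted, re-adds the TCP listener and the Delay0 namespace, relaunches spdk_nvme_perf, then polls for the perf process with kill -0. A condensed sketch of test/nvmf/target/delete_subsystem.sh lines 48-58 (rpc_cmd wraps scripts/rpc.py against the running target; $SPDK_BIN_DIR here stands in for the build/bin path shown in the log):

# Recreate the subsystem that was deleted out from under the previous perf run:
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Relaunch 3 s of 70/30 random read/write I/O (queue depth 128, 512 B) in the background:
"$SPDK_BIN_DIR/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll for its exit; kill -0 only probes whether the pid is still alive.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && exit 1   # bail out if perf wedges instead of exiting
    sleep 0.5
done

The polling loop with the 0.5 s steps is exactly what the (( delay++ > 20 )) / kill -0 / sleep 0.5 iterations below are printing.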
00:10:11.243 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:11.243 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670 00:10:11.243 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:11.811 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:11.811 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670 00:10:11.811 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:12.070 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:12.070 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670 00:10:12.070 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:12.639 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:12.639 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670 00:10:12.639 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:13.207 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:13.207 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670 00:10:13.207 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:13.776 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:13.776 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670 00:10:13.776 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:14.035 Initializing NVMe Controllers 00:10:14.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:14.035 Controller IO queue size 128, less than required. 00:10:14.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:14.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:14.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:14.035 Initialization complete. Launching workers. 
00:10:14.035 ========================================================
00:10:14.035                                                                  Latency(us)
00:10:14.035 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:10:14.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004925.48 1000257.07 1040862.48
00:10:14.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004878.26 1000270.70 1014018.57
00:10:14.035 ========================================================
00:10:14.035 Total                                                           :     256.00       0.12 1004901.87 1000257.07 1040862.48
00:10:14.035
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2424670
00:10:14.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2424670) - No such process
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2424670
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:14.295 rmmod nvme_tcp
00:10:14.295 rmmod nvme_fabrics
00:10:14.295 rmmod nvme_keyring
00:10:14.295 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:14.295 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:10:14.295 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:10:14.295 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2424119 ']'
00:10:14.295 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2424119
00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2424119 ']'
00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2424119
00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2424119
00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2424119' 00:10:14.296 killing process with pid 2424119 00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2424119 00:10:14.296 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2424119 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.554 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:17.094 00:10:17.094 real 0m13.070s 00:10:17.094 user 0m28.114s 00:10:17.094 sys 0m3.585s 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.094 ************************************ 00:10:17.094 END TEST nvmf_delete_subsystem 00:10:17.094 ************************************ 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.094 ************************************ 00:10:17.094 START TEST nvmf_host_management 00:10:17.094 ************************************ 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:17.094 * Looking for test storage... 
00:10:17.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.094 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:10:17.095 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.631 
14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:19.631 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:19.631 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.631 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:19.632 Found net devices under 0000:84:00.0: cvl_0_0 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:19.632 Found net devices under 0000:84:00.1: cvl_0_1 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:19.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms
00:10:19.632
00:10:19.632 --- 10.0.0.2 ping statistics ---
00:10:19.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:19.632 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:19.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:19.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms
00:10:19.632
00:10:19.632 --- 10.0.0.1 ping statistics ---
00:10:19.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:19.632 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2427039
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2427039
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2427039 ']'
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on
UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.632 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.632 [2024-07-26 14:04:36.320063] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:10:19.632 [2024-07-26 14:04:36.320148] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.632 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.632 [2024-07-26 14:04:36.398011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.892 [2024-07-26 14:04:36.524166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.892 [2024-07-26 14:04:36.524233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.892 [2024-07-26 14:04:36.524250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.892 [2024-07-26 14:04:36.524264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.892 [2024-07-26 14:04:36.524275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.892 [2024-07-26 14:04:36.524361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.892 [2024-07-26 14:04:36.524417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.892 [2024-07-26 14:04:36.524466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:19.892 [2024-07-26 14:04:36.524470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.892 [2024-07-26 14:04:36.706865] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.892 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.892 Malloc0 00:10:19.892 [2024-07-26 14:04:36.777373] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2427199 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2427199 /var/tmp/bdevperf.sock 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2427199 ']' 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:20.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
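For orientation, the loopback topology this test pinged into existence a little earlier (the nvmf_tcp_init steps in nvmf/common.sh): one port of the e810 pair, cvl_0_0 with 10.0.0.2, is moved into a private network namespace for the target; its peer cvl_0_1 with 10.0.0.1 stays in the root namespace for the initiator; and nvmf_tgt then runs inside the namespace. A condensed sketch of that setup, reusing the names and addresses from the trace ($SPDK_BIN_DIR is shorthand for the build/bin path, not a variable the log itself shows):

# Target NIC lives in its own netns; initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
ping -c 1 10.0.0.2                                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns
# Every target-side process is then prefixed with the namespace, as above:
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E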
00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:20.151 { 00:10:20.151 "params": { 00:10:20.151 "name": "Nvme$subsystem", 00:10:20.151 "trtype": "$TEST_TRANSPORT", 00:10:20.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.151 "adrfam": "ipv4", 00:10:20.151 "trsvcid": "$NVMF_PORT", 00:10:20.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.151 "hdgst": ${hdgst:-false}, 00:10:20.151 "ddgst": ${ddgst:-false} 00:10:20.151 }, 00:10:20.151 "method": "bdev_nvme_attach_controller" 00:10:20.151 } 00:10:20.151 EOF 00:10:20.151 )") 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:20.151 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:20.151 "params": { 00:10:20.151 "name": "Nvme0", 00:10:20.151 "trtype": "tcp", 00:10:20.151 "traddr": "10.0.0.2", 00:10:20.151 "adrfam": "ipv4", 00:10:20.151 "trsvcid": "4420", 00:10:20.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:20.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:20.151 "hdgst": false, 00:10:20.151 "ddgst": false 00:10:20.151 }, 00:10:20.151 "method": "bdev_nvme_attach_controller" 00:10:20.151 }' 00:10:20.152 [2024-07-26 14:04:36.865086] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:10:20.152 [2024-07-26 14:04:36.865172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427199 ] 00:10:20.152 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.152 [2024-07-26 14:04:36.935105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.411 [2024-07-26 14:04:37.058260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.411 Running I/O for 10 seconds... 
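The --json /dev/fd/63 handed to bdevperf above is bash process substitution: gen_nvmf_target_json assembles one bdev_nvme_attach_controller fragment per subsystem from a heredoc and prints it, so bdevperf attaches to the target without a config file on disk. A standalone sketch of the idiom with the Nvme0 values the trace printed (the real helper in nvmf/common.sh also runs the assembled fragments through jq before printing; that wrapping is omitted here):

# Sketch of the heredoc idiom behind gen_nvmf_target_json (values from the log):
config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
printf '%s\n' "${config[@]}"

bdevperf consumes this via --json <(gen_nvmf_target_json 0) together with -q 64 -o 65536 -w verify -t 10, exactly the flags visible in the trace above.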
00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:20.695 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=66 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 66 -ge 100 ']' 00:10:20.696 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.968 14:04:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.968 [2024-07-26 14:04:37.704470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237e3c0 is same with the state(5) to be set 00:10:20.968 [2024-07-26 14:04:37.704601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237e3c0 is same with the state(5) to be set 00:10:20.968 [2024-07-26 14:04:37.704620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237e3c0 is same with the state(5) to be set 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.968 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:20.968 [2024-07-26 14:04:37.719832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.968 [2024-07-26 14:04:37.719876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.968 [2024-07-26 14:04:37.719896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.968 [2024-07-26 14:04:37.719912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.968 [2024-07-26 14:04:37.719927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.968 [2024-07-26 14:04:37.719942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.968 
[2024-07-26 14:04:37.719958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:10:20.968 [2024-07-26 14:04:37.719972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:20.968 [2024-07-26 14:04:37.719993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126c540 is same with the state(5) to be set
00:10:20.968 [2024-07-26 14:04:37.720085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:20.968 [2024-07-26 14:04:37.720108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:20.968 [62 further WRITE/completion pairs elided: cid 1 through cid 62, lba 73856 through 81664 advancing by 128 blocks, every command completed with the same ABORTED - SQ DELETION (00/08) status]
00:10:20.970 [2024-07-26 14:04:37.722200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:20.970 [2024-07-26 14:04:37.722219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:20.970 [2024-07-26 14:04:37.722324] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x167cd70 was disconnected and freed. reset controller.
00:10:20.970 [2024-07-26 14:04:37.723567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:10:20.970 task offset: 73728 on job bdev=Nvme0n1 fails
00:10:20.970
00:10:20.970 Latency(us)
00:10:20.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:20.970 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:20.970 Job: Nvme0n1 ended in about 0.44 seconds with error
00:10:20.970 Verification LBA range: start 0x0 length 0x400
00:10:20.970 Nvme0n1 : 0.44 1307.33 81.71 145.26 0.00 42811.62 3179.71 40389.59
00:10:20.970 ===================================================================================================================
00:10:20.970 Total : 1307.33 81.71 145.26 0.00 42811.62 3179.71 40389.59
00:10:20.970 [2024-07-26 14:04:37.725663] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:20.970 [2024-07-26 14:04:37.725695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126c540 (9): Bad file descriptor
00:10:20.970 [2024-07-26 14:04:37.733857] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
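The failure injected above comes from host_management.sh: with bdevperf I/O in flight, the test revokes the initiator's host NQN, the target deletes that host's queues (so every queued WRITE completes with ABORTED - SQ DELETION, as printed), and then re-allows the host so the driver's automatic controller reset can reconnect. A minimal by-hand sketch of that step, reusing only the rpc.py path and NQNs already visible in this log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Revoke access while I/O is running; the target tears down the host's queues.
  $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Restore access so the host-side reset ("Resetting controller successful." above) can complete.
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0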
00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2427199 00:10:21.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2427199) - No such process 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:21.906 { 00:10:21.906 "params": { 00:10:21.906 "name": "Nvme$subsystem", 00:10:21.906 "trtype": "$TEST_TRANSPORT", 00:10:21.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.906 "adrfam": "ipv4", 00:10:21.906 "trsvcid": "$NVMF_PORT", 00:10:21.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.906 "hdgst": ${hdgst:-false}, 00:10:21.906 "ddgst": ${ddgst:-false} 00:10:21.906 }, 00:10:21.906 "method": "bdev_nvme_attach_controller" 00:10:21.906 } 00:10:21.906 EOF 00:10:21.906 )") 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:21.906 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:21.906 "params": { 00:10:21.906 "name": "Nvme0", 00:10:21.906 "trtype": "tcp", 00:10:21.906 "traddr": "10.0.0.2", 00:10:21.906 "adrfam": "ipv4", 00:10:21.906 "trsvcid": "4420", 00:10:21.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:21.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:21.906 "hdgst": false, 00:10:21.906 "ddgst": false 00:10:21.906 }, 00:10:21.906 "method": "bdev_nvme_attach_controller" 00:10:21.906 }' 00:10:21.906 [2024-07-26 14:04:38.768894] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
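Note how bdevperf receives its configuration above: gen_nvmf_target_json renders a bdev_nvme_attach_controller stanza (the JSON shown by the printf trace) and the harness hands it to bdevperf as an anonymous file descriptor, which is why the command line reads --json /dev/fd/62. Below is a standalone sketch of the same technique, assuming bash process substitution; the attach-controller stanza is copied from the output above, while the surrounding "subsystems"/"bdev" skeleton is an assumption based on SPDK's app JSON-config format, not a verbatim copy of gen_nvmf_target_json:

  gen_config() {
      cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  }
  # <(...) materializes as /dev/fd/NN for the child process, matching the
  # --json /dev/fd/62 seen in the trace above.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json <(gen_config) -q 64 -o 65536 -w verify -t 1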
00:10:21.906 [2024-07-26 14:04:38.768994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427467 ]
00:10:22.165 EAL: No free 2048 kB hugepages reported on node 1
00:10:22.165 [2024-07-26 14:04:38.838316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:22.165 [2024-07-26 14:04:38.959226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:22.424 Running I/O for 1 seconds...
00:10:23.360
00:10:23.360 Latency(us)
00:10:23.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:23.360 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:23.360 Verification LBA range: start 0x0 length 0x400
00:10:23.360 Nvme0n1 : 1.01 1271.73 79.48 0.00 0.00 49519.67 12379.02 49321.91
00:10:23.360 ===================================================================================================================
00:10:23.360 Total : 1271.73 79.48 0.00 0.00 49519.67 12379.02 49321.91
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:23.929 rmmod nvme_tcp
00:10:23.929 rmmod nvme_fabrics
00:10:23.929 rmmod nvme_keyring
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2427039 ']'
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2427039
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2427039 ']'
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2427039
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2427039
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2427039'
00:10:23.929 killing process with pid 2427039
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2427039
00:10:23.929 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2427039
00:10:24.189 [2024-07-26 14:04:40.935587] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:10:24.189 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:24.190 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:24.190 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:24.190 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:24.190 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:24.190 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:24.190 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:24.190 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:10:26.727
00:10:26.727 real 0m9.563s
00:10:26.727 user 0m20.918s
00:10:26.727 sys 0m3.266s
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:26.727 ************************************
00:10:26.727 END TEST nvmf_host_management
00:10:26.727 ************************************
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:26.727 ************************************
00:10:26.727 START TEST nvmf_lvol
00:10:26.727 ************************************
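The nvmf_lvol test that now starts drives a fixed RPC sequence against the target: two malloc bdevs are striped into a RAID-0, a logical-volume store is built on the stripe, and a 20 MiB lvol from that store is exported over NVMe-oF/TCP, then snapshotted, resized, cloned, and inflated while spdk_nvme_perf runs against it. As a reading aid, here is that sequence condensed into a sketch; every RPC appears verbatim in the trace below, only the shell variables holding the returned UUIDs are added here for illustration:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                   # -> Malloc0
  $rpc bdev_malloc_create 64 512                                   # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # 64 KiB strips
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                                 # grow to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                                  # decouple clone from snapshot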
14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:26.727 * Looking for test storage... 00:10:26.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[further repetitions of the same golangci/protoc/go toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=[same toolchain prefixes, now led by /opt/go/1.21.1/bin; elided]
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=[same toolchain prefixes, now led by /opt/protoc/21.7/bin; elided]
00:10:26.727 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo [the exported PATH, elided]
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:10:26.728 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:29.262 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.262 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:10:29.262 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:29.262 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:29.262 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:29.263 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:29.263 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:29.263 Found net devices under 0000:84:00.0: cvl_0_0 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:29.263 Found net devices under 0000:84:00.1: cvl_0_1 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.263 14:04:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:29.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:10:29.263 00:10:29.263 --- 10.0.0.2 ping statistics --- 00:10:29.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.263 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:10:29.263 00:10:29.263 --- 10.0.0.1 ping statistics --- 00:10:29.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.263 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:29.263 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2429701 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2429701 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2429701 ']' 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.264 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:29.264 [2024-07-26 14:04:45.931232] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
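The two ping checks above confirm the point-to-point plumbing that nvmftestinit builds before the target comes up: the first E810 port (cvl_0_0) is moved into a private network namespace for the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace above into a sketch (not the harness itself):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator

Every target-side process is then launched through ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is exactly how nvmf_tgt is started in the trace above.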
00:10:29.264 [2024-07-26 14:04:45.931316] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.264 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.264 [2024-07-26 14:04:46.001354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.264 [2024-07-26 14:04:46.121741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.264 [2024-07-26 14:04:46.121816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.264 [2024-07-26 14:04:46.121833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.264 [2024-07-26 14:04:46.121847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.264 [2024-07-26 14:04:46.121859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.264 [2024-07-26 14:04:46.121961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.264 [2024-07-26 14:04:46.122018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.264 [2024-07-26 14:04:46.122022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.201 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.201 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:30.201 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:30.201 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.201 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:30.201 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.201 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:30.458 [2024-07-26 14:04:47.206887] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.458 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.717 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:30.717 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.283 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:31.283 14:04:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:31.543 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:32.110 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=49cbe498-8dab-4e1c-8d70-0170f090d30d 
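At this point the trace has built the volume stack that nvmf_lvol exercises: two 64 MiB malloc bdevs striped into a raid0 bdev, an lvstore on top of the raid, and a 20 MiB lvol inside it. A minimal standalone sketch of the same RPC sequence, where $rpc stands for scripts/rpc.py in an SPDK checkout (the trace spells out the full Jenkins workspace path):

  $rpc bdev_malloc_create 64 512                                  # 64 MiB, 512 B blocks -> Malloc0
  $rpc bdev_malloc_create 64 512                                  # second backing bdev -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # raid0, 64 KiB strip size
  lvs_uuid=$($rpc bdev_lvol_create_lvstore raid0 lvs)             # prints the new lvstore UUID
  lvol_uuid=$($rpc bdev_lvol_create -u "$lvs_uuid" lvol 20)       # 20 MiB lvol on that store

The trace below then exports the lvol over NVMe/TCP and, while spdk_nvme_perf drives random writes at it, snapshots it (MY_SNAPSHOT), resizes it to 30 MiB, clones the snapshot (MY_CLONE) and inflates the clone.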
00:10:32.110 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 49cbe498-8dab-4e1c-8d70-0170f090d30d lvol 20 00:10:32.368 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9ac10db5-1a4e-4b11-8a43-31ef104e74e1 00:10:32.368 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.628 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9ac10db5-1a4e-4b11-8a43-31ef104e74e1 00:10:33.206 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:33.464 [2024-07-26 14:04:50.284878] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.464 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:34.033 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2430270 00:10:34.033 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:34.033 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:34.033 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.971 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9ac10db5-1a4e-4b11-8a43-31ef104e74e1 MY_SNAPSHOT 00:10:35.539 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3fb2317b-52c9-4bfa-a133-30329c80609b 00:10:35.539 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9ac10db5-1a4e-4b11-8a43-31ef104e74e1 30 00:10:35.797 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3fb2317b-52c9-4bfa-a133-30329c80609b MY_CLONE 00:10:36.363 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fa54315d-9b3c-4f5b-9cb7-b9ffe9b89fd9 00:10:36.363 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fa54315d-9b3c-4f5b-9cb7-b9ffe9b89fd9 00:10:37.298 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2430270 00:10:45.454 Initializing NVMe Controllers 00:10:45.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:45.454 Controller IO queue size 128, less than required. 00:10:45.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
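The two notices above are informational: the controller's IO queue (128 entries) cannot hold the full requested queue depth at once, so overflow requests are queued in the initiator's NVMe driver rather than failed. Passing a lower -q to the same perf command would avoid the warning; a sketch with only that flag changed:

  ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 32 -s 512 -w randwrite -t 10 -c 0x18   # -q lowered from 128 to 32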
00:10:45.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:45.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:45.454 Initialization complete. Launching workers. 00:10:45.454 ======================================================== 00:10:45.454 Latency(us) 00:10:45.454 Device Information : IOPS MiB/s Average min max 00:10:45.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9625.90 37.60 13308.26 2244.16 89224.64 00:10:45.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9490.10 37.07 13502.13 2353.70 81303.07 00:10:45.454 ======================================================== 00:10:45.454 Total : 19116.00 74.67 13404.50 2244.16 89224.64 00:10:45.454 00:10:45.454 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:45.454 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9ac10db5-1a4e-4b11-8a43-31ef104e74e1 00:10:45.454 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49cbe498-8dab-4e1c-8d70-0170f090d30d 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.024 rmmod nvme_tcp 00:10:46.024 rmmod nvme_fabrics 00:10:46.024 rmmod nvme_keyring 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2429701 ']' 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2429701 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2429701 ']' 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2429701 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2429701 00:10:46.024 14:05:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2429701' 00:10:46.024 killing process with pid 2429701 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2429701 00:10:46.024 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2429701 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.284 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.824 00:10:48.824 real 0m22.011s 00:10:48.824 user 1m14.491s 00:10:48.824 sys 0m6.507s 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:48.824 ************************************ 00:10:48.824 END TEST nvmf_lvol 00:10:48.824 ************************************ 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.824 ************************************ 00:10:48.824 START TEST nvmf_lvs_grow 00:10:48.824 ************************************ 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:48.824 * Looking for test storage... 
00:10:48.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.824 14:05:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:48.824 14:05:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.824 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:51.365 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:51.365 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.365 
14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:51.365 Found net devices under 0000:84:00.0: cvl_0_0 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:51.365 Found net devices under 0000:84:00.1: cvl_0_1 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.365 14:05:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.365 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:10:51.366 00:10:51.366 --- 10.0.0.2 ping statistics --- 00:10:51.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.366 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:10:51.366 00:10:51.366 --- 10.0.0.1 ping statistics --- 00:10:51.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.366 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.366 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2433687 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2433687 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2433687 ']' 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.366 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.366 [2024-07-26 14:05:08.081631] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
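Both test stages wire target and initiator through a network namespace on one host: the first ice port (cvl_0_0) moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and one ping in each direction proves the path. Condensed from the trace above, with the same interface and namespace names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in on 4420
  ping -c 1 10.0.0.2                                     # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back

nvmf_tgt is then launched with ip netns exec cvl_0_0_ns_spdk, so it listens on 10.0.0.2 while initiator-side tools connect from the root namespace.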
00:10:51.366 [2024-07-26 14:05:08.081724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.366 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.366 [2024-07-26 14:05:08.156247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.625 [2024-07-26 14:05:08.283123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.625 [2024-07-26 14:05:08.283172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.625 [2024-07-26 14:05:08.283187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.625 [2024-07-26 14:05:08.283201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.625 [2024-07-26 14:05:08.283217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.625 [2024-07-26 14:05:08.283248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.625 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.625 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:51.625 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.625 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.625 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.625 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.625 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:52.194 [2024-07-26 14:05:08.992619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:52.194 ************************************ 00:10:52.194 START TEST lvs_grow_clean 00:10:52.194 ************************************ 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:52.194 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:52.760 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:52.760 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:53.019 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:10:53.019 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:10:53.019 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:53.278 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:53.278 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:53.278 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 lvol 150 00:10:53.536 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dca4757c-a530-4be0-9317-1a1c357a4077 00:10:53.536 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:53.536 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:53.796 [2024-07-26 14:05:10.531110] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:53.796 [2024-07-26 14:05:10.531196] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:53.796 true 00:10:53.796 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:10:53.796 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:54.055 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:54.055 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:54.315 14:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dca4757c-a530-4be0-9317-1a1c357a4077 00:10:54.575 14:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:54.835 [2024-07-26 14:05:11.698720] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.835 14:05:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2434153 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2434153 /var/tmp/bdevperf.sock 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2434153 ']' 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.403 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:55.403 [2024-07-26 14:05:12.055069] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
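The lvs_grow_clean case verifies that an lvstore can absorb new capacity from a grown backing device: a 200 MiB file-backed aio bdev with a 4 MiB cluster size yields 49 data clusters, the backing file is truncated to 400 MiB, bdev_aio_rescan picks up the new block count, and bdev_lvol_grow_lvstore, issued later in the trace, is expected to double the cluster count to 99. A condensed sketch of the flow, with $rpc as before and a local file standing in for the workspace aio_bdev path:

  truncate -s 200M aio_file
  $rpc bdev_aio_create aio_file aio_bdev 4096             # 4 KiB logical block size
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  $rpc bdev_lvol_create -u "$lvs" lvol 150                # 150 MiB lvol
  truncate -s 400M aio_file                               # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                           # block count 51200 -> 102400
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99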
00:10:55.403 [2024-07-26 14:05:12.055156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434153 ] 00:10:55.403 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.403 [2024-07-26 14:05:12.122328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.403 [2024-07-26 14:05:12.243506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.662 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.662 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:55.662 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:56.232 Nvme0n1 00:10:56.232 14:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:56.801 [ 00:10:56.801 { 00:10:56.801 "name": "Nvme0n1", 00:10:56.801 "aliases": [ 00:10:56.801 "dca4757c-a530-4be0-9317-1a1c357a4077" 00:10:56.801 ], 00:10:56.801 "product_name": "NVMe disk", 00:10:56.801 "block_size": 4096, 00:10:56.801 "num_blocks": 38912, 00:10:56.801 "uuid": "dca4757c-a530-4be0-9317-1a1c357a4077", 00:10:56.801 "assigned_rate_limits": { 00:10:56.801 "rw_ios_per_sec": 0, 00:10:56.801 "rw_mbytes_per_sec": 0, 00:10:56.801 "r_mbytes_per_sec": 0, 00:10:56.801 "w_mbytes_per_sec": 0 00:10:56.801 }, 00:10:56.801 "claimed": false, 00:10:56.801 "zoned": false, 00:10:56.801 "supported_io_types": { 00:10:56.801 "read": true, 00:10:56.801 "write": true, 00:10:56.801 "unmap": true, 00:10:56.801 "flush": true, 00:10:56.801 "reset": true, 00:10:56.801 "nvme_admin": true, 00:10:56.801 "nvme_io": true, 00:10:56.801 "nvme_io_md": false, 00:10:56.801 "write_zeroes": true, 00:10:56.801 "zcopy": false, 00:10:56.801 "get_zone_info": false, 00:10:56.801 "zone_management": false, 00:10:56.801 "zone_append": false, 00:10:56.801 "compare": true, 00:10:56.801 "compare_and_write": true, 00:10:56.801 "abort": true, 00:10:56.801 "seek_hole": false, 00:10:56.801 "seek_data": false, 00:10:56.801 "copy": true, 00:10:56.801 "nvme_iov_md": false 00:10:56.801 }, 00:10:56.801 "memory_domains": [ 00:10:56.801 { 00:10:56.801 "dma_device_id": "system", 00:10:56.801 "dma_device_type": 1 00:10:56.801 } 00:10:56.801 ], 00:10:56.801 "driver_specific": { 00:10:56.801 "nvme": [ 00:10:56.801 { 00:10:56.801 "trid": { 00:10:56.801 "trtype": "TCP", 00:10:56.801 "adrfam": "IPv4", 00:10:56.801 "traddr": "10.0.0.2", 00:10:56.801 "trsvcid": "4420", 00:10:56.801 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:56.801 }, 00:10:56.801 "ctrlr_data": { 00:10:56.801 "cntlid": 1, 00:10:56.801 "vendor_id": "0x8086", 00:10:56.801 "model_number": "SPDK bdev Controller", 00:10:56.801 "serial_number": "SPDK0", 00:10:56.801 "firmware_revision": "24.09", 00:10:56.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:56.801 "oacs": { 00:10:56.801 "security": 0, 00:10:56.801 "format": 0, 00:10:56.801 "firmware": 0, 00:10:56.801 "ns_manage": 0 00:10:56.801 }, 00:10:56.801 
"multi_ctrlr": true, 00:10:56.801 "ana_reporting": false 00:10:56.801 }, 00:10:56.801 "vs": { 00:10:56.801 "nvme_version": "1.3" 00:10:56.801 }, 00:10:56.801 "ns_data": { 00:10:56.801 "id": 1, 00:10:56.801 "can_share": true 00:10:56.801 } 00:10:56.801 } 00:10:56.801 ], 00:10:56.801 "mp_policy": "active_passive" 00:10:56.801 } 00:10:56.801 } 00:10:56.801 ] 00:10:56.801 14:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2434396 00:10:56.801 14:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:56.801 14:05:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:57.060 Running I/O for 10 seconds... 00:10:57.998 Latency(us) 00:10:57.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.998 Nvme0n1 : 1.00 14183.00 55.40 0.00 0.00 0.00 0.00 0.00 00:10:57.998 =================================================================================================================== 00:10:57.998 Total : 14183.00 55.40 0.00 0.00 0.00 0.00 0.00 00:10:57.998 00:10:58.951 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:10:58.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.951 Nvme0n1 : 2.00 14276.00 55.77 0.00 0.00 0.00 0.00 0.00 00:10:58.951 =================================================================================================================== 00:10:58.951 Total : 14276.00 55.77 0.00 0.00 0.00 0.00 0.00 00:10:58.951 00:10:59.209 true 00:10:59.209 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:10:59.209 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:59.468 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:59.468 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:59.468 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2434396 00:11:00.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.072 Nvme0n1 : 3.00 14383.00 56.18 0.00 0.00 0.00 0.00 0.00 00:11:00.072 =================================================================================================================== 00:11:00.072 Total : 14383.00 56.18 0.00 0.00 0.00 0.00 0.00 00:11:00.072 00:11:01.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.009 Nvme0n1 : 4.00 14443.50 56.42 0.00 0.00 0.00 0.00 0.00 00:11:01.009 =================================================================================================================== 00:11:01.009 Total : 14443.50 56.42 0.00 0.00 0.00 0.00 0.00 00:11:01.009 00:11:01.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:11:01.946 Nvme0n1 : 5.00 14503.40 56.65 0.00 0.00 0.00 0.00 0.00 00:11:01.946 =================================================================================================================== 00:11:01.946 Total : 14503.40 56.65 0.00 0.00 0.00 0.00 0.00 00:11:01.946 00:11:02.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.883 Nvme0n1 : 6.00 14548.00 56.83 0.00 0.00 0.00 0.00 0.00 00:11:02.883 =================================================================================================================== 00:11:02.883 Total : 14548.00 56.83 0.00 0.00 0.00 0.00 0.00 00:11:02.883 00:11:04.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.261 Nvme0n1 : 7.00 14567.57 56.90 0.00 0.00 0.00 0.00 0.00 00:11:04.261 =================================================================================================================== 00:11:04.261 Total : 14567.57 56.90 0.00 0.00 0.00 0.00 0.00 00:11:04.261 00:11:05.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.196 Nvme0n1 : 8.00 14604.25 57.05 0.00 0.00 0.00 0.00 0.00 00:11:05.196 =================================================================================================================== 00:11:05.196 Total : 14604.25 57.05 0.00 0.00 0.00 0.00 0.00 00:11:05.196 00:11:06.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.133 Nvme0n1 : 9.00 14639.44 57.19 0.00 0.00 0.00 0.00 0.00 00:11:06.133 =================================================================================================================== 00:11:06.133 Total : 14639.44 57.19 0.00 0.00 0.00 0.00 0.00 00:11:06.133 00:11:07.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.069 Nvme0n1 : 10.00 14663.90 57.28 0.00 0.00 0.00 0.00 0.00 00:11:07.069 =================================================================================================================== 00:11:07.069 Total : 14663.90 57.28 0.00 0.00 0.00 0.00 0.00 00:11:07.069 00:11:07.069 00:11:07.069 Latency(us) 00:11:07.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.069 Nvme0n1 : 10.01 14661.22 57.27 0.00 0.00 8724.17 2293.76 16602.45 00:11:07.069 =================================================================================================================== 00:11:07.069 Total : 14661.22 57.27 0.00 0.00 8724.17 2293.76 16602.45 00:11:07.069 0 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2434153 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2434153 ']' 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2434153 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2434153 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:07.069 
14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2434153' 00:11:07.069 killing process with pid 2434153 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2434153 00:11:07.069 Received shutdown signal, test time was about 10.000000 seconds 00:11:07.069 00:11:07.069 Latency(us) 00:11:07.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.069 =================================================================================================================== 00:11:07.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:07.069 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2434153 00:11:07.327 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.585 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:08.152 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:11:08.152 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:08.411 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:08.411 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:08.411 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:08.980 [2024-07-26 14:05:25.663300] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:08.981 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:11:09.239 request: 00:11:09.239 { 00:11:09.239 "uuid": "80bbbd4f-49cf-45d9-bec0-5ae58357dc88", 00:11:09.239 "method": "bdev_lvol_get_lvstores", 00:11:09.239 "req_id": 1 00:11:09.239 } 00:11:09.239 Got JSON-RPC error response 00:11:09.239 response: 00:11:09.239 { 00:11:09.239 "code": -19, 00:11:09.239 "message": "No such device" 00:11:09.239 } 00:11:09.239 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:09.239 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:09.239 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:09.239 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:09.239 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:09.498 aio_bdev 00:11:09.498 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dca4757c-a530-4be0-9317-1a1c357a4077 00:11:09.498 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=dca4757c-a530-4be0-9317-1a1c357a4077 00:11:09.498 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.498 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:11:09.498 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.498 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.498 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:10.065 14:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b dca4757c-a530-4be0-9317-1a1c357a4077 -t 2000 00:11:10.323 [ 00:11:10.323 { 00:11:10.323 "name": "dca4757c-a530-4be0-9317-1a1c357a4077", 00:11:10.323 "aliases": [ 00:11:10.323 "lvs/lvol" 00:11:10.323 ], 00:11:10.323 "product_name": "Logical Volume", 00:11:10.323 "block_size": 4096, 00:11:10.323 "num_blocks": 38912, 00:11:10.323 "uuid": "dca4757c-a530-4be0-9317-1a1c357a4077", 00:11:10.323 "assigned_rate_limits": { 00:11:10.323 "rw_ios_per_sec": 0, 00:11:10.323 "rw_mbytes_per_sec": 0, 00:11:10.323 "r_mbytes_per_sec": 0, 00:11:10.323 "w_mbytes_per_sec": 0 00:11:10.323 }, 00:11:10.323 "claimed": false, 00:11:10.323 "zoned": false, 00:11:10.323 "supported_io_types": { 00:11:10.323 "read": true, 00:11:10.323 "write": true, 00:11:10.323 "unmap": true, 00:11:10.323 "flush": false, 00:11:10.323 "reset": true, 00:11:10.323 "nvme_admin": false, 00:11:10.323 "nvme_io": false, 00:11:10.323 "nvme_io_md": false, 00:11:10.323 "write_zeroes": true, 00:11:10.323 "zcopy": false, 00:11:10.323 "get_zone_info": false, 00:11:10.323 "zone_management": false, 00:11:10.323 "zone_append": false, 00:11:10.323 "compare": false, 00:11:10.323 "compare_and_write": false, 00:11:10.323 "abort": false, 00:11:10.323 "seek_hole": true, 00:11:10.323 "seek_data": true, 00:11:10.323 "copy": false, 00:11:10.323 "nvme_iov_md": false 00:11:10.323 }, 00:11:10.323 "driver_specific": { 00:11:10.323 "lvol": { 00:11:10.323 "lvol_store_uuid": "80bbbd4f-49cf-45d9-bec0-5ae58357dc88", 00:11:10.323 "base_bdev": "aio_bdev", 00:11:10.323 "thin_provision": false, 00:11:10.323 "num_allocated_clusters": 38, 00:11:10.323 "snapshot": false, 00:11:10.323 "clone": false, 00:11:10.323 "esnap_clone": false 00:11:10.323 } 00:11:10.323 } 00:11:10.323 } 00:11:10.323 ] 00:11:10.323 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:11:10.323 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:11:10.323 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:10.890 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:10.890 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:11:10.890 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:11.149 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:11.149 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dca4757c-a530-4be0-9317-1a1c357a4077 00:11:11.407 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80bbbd4f-49cf-45d9-bec0-5ae58357dc88 00:11:11.666 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.924 00:11:11.924 real 0m19.627s 00:11:11.924 user 0m19.665s 00:11:11.924 sys 0m2.154s 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:11.924 ************************************ 00:11:11.924 END TEST lvs_grow_clean 00:11:11.924 ************************************ 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.924 ************************************ 00:11:11.924 START TEST lvs_grow_dirty 00:11:11.924 ************************************ 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.924 14:05:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:12.492 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:12.492 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:12.751 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:12.751 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:12.751 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:13.010 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:13.010 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:13.010 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab lvol 150 00:11:13.270 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=586c370c-1a0a-4e77-bda9-12bdb50054ce 00:11:13.270 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:13.270 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:13.528 [2024-07-26 14:05:30.237082] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:13.528 [2024-07-26 14:05:30.237181] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:13.528 true 00:11:13.528 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:13.529 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:13.787 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:13.787 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:14.045 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 586c370c-1a0a-4e77-bda9-12bdb50054ce 00:11:14.612 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:14.612 [2024-07-26 14:05:31.492893] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.870 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2436587 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2436587 /var/tmp/bdevperf.sock 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2436587 ']' 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:15.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.229 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:15.229 [2024-07-26 14:05:31.843483] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:11:15.229 [2024-07-26 14:05:31.843573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2436587 ] 00:11:15.229 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.229 [2024-07-26 14:05:31.910831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.229 [2024-07-26 14:05:32.035241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.487 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.487 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:15.487 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:16.425 Nvme0n1 00:11:16.425 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:16.684 [ 00:11:16.684 { 00:11:16.684 "name": "Nvme0n1", 00:11:16.684 "aliases": [ 00:11:16.684 "586c370c-1a0a-4e77-bda9-12bdb50054ce" 00:11:16.684 ], 00:11:16.684 "product_name": "NVMe disk", 00:11:16.684 "block_size": 4096, 00:11:16.684 "num_blocks": 38912, 00:11:16.684 "uuid": "586c370c-1a0a-4e77-bda9-12bdb50054ce", 00:11:16.684 "assigned_rate_limits": { 00:11:16.684 "rw_ios_per_sec": 0, 00:11:16.684 "rw_mbytes_per_sec": 0, 00:11:16.684 "r_mbytes_per_sec": 0, 00:11:16.684 "w_mbytes_per_sec": 0 00:11:16.684 }, 00:11:16.684 "claimed": false, 00:11:16.684 "zoned": false, 00:11:16.684 "supported_io_types": { 00:11:16.684 "read": true, 00:11:16.684 "write": true, 00:11:16.684 "unmap": true, 00:11:16.684 "flush": true, 00:11:16.684 "reset": true, 00:11:16.684 "nvme_admin": true, 00:11:16.684 "nvme_io": true, 00:11:16.684 "nvme_io_md": false, 00:11:16.684 "write_zeroes": true, 00:11:16.684 "zcopy": false, 00:11:16.684 "get_zone_info": false, 00:11:16.684 "zone_management": false, 00:11:16.684 "zone_append": false, 00:11:16.684 "compare": true, 00:11:16.684 "compare_and_write": true, 00:11:16.684 "abort": true, 00:11:16.684 "seek_hole": false, 00:11:16.684 "seek_data": false, 00:11:16.684 "copy": true, 00:11:16.684 "nvme_iov_md": false 00:11:16.684 }, 00:11:16.684 "memory_domains": [ 00:11:16.684 { 00:11:16.684 "dma_device_id": "system", 00:11:16.684 "dma_device_type": 1 00:11:16.684 } 00:11:16.684 ], 00:11:16.684 "driver_specific": { 00:11:16.684 "nvme": [ 00:11:16.684 { 00:11:16.684 "trid": { 00:11:16.684 "trtype": "TCP", 00:11:16.684 "adrfam": "IPv4", 00:11:16.684 "traddr": "10.0.0.2", 00:11:16.684 "trsvcid": "4420", 00:11:16.684 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:16.684 }, 00:11:16.684 "ctrlr_data": { 00:11:16.684 "cntlid": 1, 00:11:16.684 "vendor_id": "0x8086", 00:11:16.684 "model_number": "SPDK bdev Controller", 00:11:16.684 "serial_number": "SPDK0", 00:11:16.684 "firmware_revision": "24.09", 00:11:16.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:16.684 "oacs": { 00:11:16.684 "security": 0, 00:11:16.684 "format": 0, 00:11:16.684 "firmware": 0, 00:11:16.684 "ns_manage": 0 00:11:16.684 }, 00:11:16.684 
"multi_ctrlr": true, 00:11:16.684 "ana_reporting": false 00:11:16.684 }, 00:11:16.684 "vs": { 00:11:16.684 "nvme_version": "1.3" 00:11:16.684 }, 00:11:16.684 "ns_data": { 00:11:16.684 "id": 1, 00:11:16.684 "can_share": true 00:11:16.684 } 00:11:16.684 } 00:11:16.684 ], 00:11:16.684 "mp_policy": "active_passive" 00:11:16.684 } 00:11:16.684 } 00:11:16.684 ] 00:11:16.944 14:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2436734 00:11:16.944 14:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:16.944 14:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:16.944 Running I/O for 10 seconds... 00:11:18.321 Latency(us) 00:11:18.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:18.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.321 Nvme0n1 : 1.00 14002.00 54.70 0.00 0.00 0.00 0.00 0.00 00:11:18.321 =================================================================================================================== 00:11:18.321 Total : 14002.00 54.70 0.00 0.00 0.00 0.00 0.00 00:11:18.321 00:11:18.889 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:19.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.147 Nvme0n1 : 2.00 14213.50 55.52 0.00 0.00 0.00 0.00 0.00 00:11:19.147 =================================================================================================================== 00:11:19.147 Total : 14213.50 55.52 0.00 0.00 0.00 0.00 0.00 00:11:19.147 00:11:19.147 true 00:11:19.147 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:19.147 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:19.715 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:19.715 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:19.715 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2436734 00:11:19.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.974 Nvme0n1 : 3.00 14286.67 55.81 0.00 0.00 0.00 0.00 0.00 00:11:19.974 =================================================================================================================== 00:11:19.974 Total : 14286.67 55.81 0.00 0.00 0.00 0.00 0.00 00:11:19.974 00:11:20.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.909 Nvme0n1 : 4.00 14382.25 56.18 0.00 0.00 0.00 0.00 0.00 00:11:20.909 =================================================================================================================== 00:11:20.909 Total : 14382.25 56.18 0.00 0.00 0.00 0.00 0.00 00:11:20.909 00:11:22.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:11:22.286 Nvme0n1 : 5.00 14440.20 56.41 0.00 0.00 0.00 0.00 0.00 00:11:22.286 =================================================================================================================== 00:11:22.286 Total : 14440.20 56.41 0.00 0.00 0.00 0.00 0.00 00:11:22.286 00:11:23.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.221 Nvme0n1 : 6.00 14473.33 56.54 0.00 0.00 0.00 0.00 0.00 00:11:23.221 =================================================================================================================== 00:11:23.221 Total : 14473.33 56.54 0.00 0.00 0.00 0.00 0.00 00:11:23.221 00:11:24.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.156 Nvme0n1 : 7.00 14496.43 56.63 0.00 0.00 0.00 0.00 0.00 00:11:24.156 =================================================================================================================== 00:11:24.156 Total : 14496.43 56.63 0.00 0.00 0.00 0.00 0.00 00:11:24.156 00:11:25.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.092 Nvme0n1 : 8.00 14536.12 56.78 0.00 0.00 0.00 0.00 0.00 00:11:25.092 =================================================================================================================== 00:11:25.092 Total : 14536.12 56.78 0.00 0.00 0.00 0.00 0.00 00:11:25.092 00:11:26.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.028 Nvme0n1 : 9.00 14571.89 56.92 0.00 0.00 0.00 0.00 0.00 00:11:26.028 =================================================================================================================== 00:11:26.028 Total : 14571.89 56.92 0.00 0.00 0.00 0.00 0.00 00:11:26.028 00:11:26.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.964 Nvme0n1 : 10.00 14585.40 56.97 0.00 0.00 0.00 0.00 0.00 00:11:26.964 =================================================================================================================== 00:11:26.964 Total : 14585.40 56.97 0.00 0.00 0.00 0.00 0.00 00:11:26.964 00:11:26.964 00:11:26.964 Latency(us) 00:11:26.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.964 Nvme0n1 : 10.01 14589.77 56.99 0.00 0.00 8768.21 2427.26 16019.91 00:11:26.964 =================================================================================================================== 00:11:26.965 Total : 14589.77 56.99 0.00 0.00 8768.21 2427.26 16019.91 00:11:26.965 0 00:11:26.965 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2436587 00:11:26.965 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2436587 ']' 00:11:26.965 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2436587 00:11:26.965 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:26.965 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.965 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2436587 00:11:27.224 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:27.224 
14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:27.224 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2436587' 00:11:27.224 killing process with pid 2436587 00:11:27.224 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2436587 00:11:27.224 Received shutdown signal, test time was about 10.000000 seconds 00:11:27.224 00:11:27.224 Latency(us) 00:11:27.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.224 =================================================================================================================== 00:11:27.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:27.224 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2436587 00:11:27.483 14:05:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.053 14:05:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:28.622 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:28.622 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2433687 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2433687 00:11:28.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2433687 Killed "${NVMF_APP[@]}" "$@" 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2438197 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 2438197 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2438197 ']' 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.881 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.881 [2024-07-26 14:05:45.737723] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:11:28.881 [2024-07-26 14:05:45.737840] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.139 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.139 [2024-07-26 14:05:45.821635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.139 [2024-07-26 14:05:45.945573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.139 [2024-07-26 14:05:45.945639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.139 [2024-07-26 14:05:45.945656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.139 [2024-07-26 14:05:45.945678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.139 [2024-07-26 14:05:45.945691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:29.139 [2024-07-26 14:05:45.945742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.398 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.398 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:29.398 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.398 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.398 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:29.398 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.398 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:29.656 [2024-07-26 14:05:46.382398] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:29.656 [2024-07-26 14:05:46.382555] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:29.656 [2024-07-26 14:05:46.382612] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 586c370c-1a0a-4e77-bda9-12bdb50054ce 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=586c370c-1a0a-4e77-bda9-12bdb50054ce 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:29.656 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:29.916 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 586c370c-1a0a-4e77-bda9-12bdb50054ce -t 2000 00:11:30.176 [ 00:11:30.176 { 00:11:30.176 "name": "586c370c-1a0a-4e77-bda9-12bdb50054ce", 00:11:30.176 "aliases": [ 00:11:30.176 "lvs/lvol" 00:11:30.176 ], 00:11:30.176 "product_name": "Logical Volume", 00:11:30.176 "block_size": 4096, 00:11:30.176 "num_blocks": 38912, 00:11:30.176 "uuid": "586c370c-1a0a-4e77-bda9-12bdb50054ce", 00:11:30.176 "assigned_rate_limits": { 00:11:30.176 "rw_ios_per_sec": 0, 00:11:30.176 "rw_mbytes_per_sec": 0, 00:11:30.176 "r_mbytes_per_sec": 0, 00:11:30.176 "w_mbytes_per_sec": 0 00:11:30.176 }, 00:11:30.176 "claimed": false, 00:11:30.176 "zoned": false, 
00:11:30.176 "supported_io_types": { 00:11:30.176 "read": true, 00:11:30.176 "write": true, 00:11:30.176 "unmap": true, 00:11:30.176 "flush": false, 00:11:30.176 "reset": true, 00:11:30.176 "nvme_admin": false, 00:11:30.176 "nvme_io": false, 00:11:30.176 "nvme_io_md": false, 00:11:30.176 "write_zeroes": true, 00:11:30.176 "zcopy": false, 00:11:30.176 "get_zone_info": false, 00:11:30.176 "zone_management": false, 00:11:30.176 "zone_append": false, 00:11:30.176 "compare": false, 00:11:30.176 "compare_and_write": false, 00:11:30.176 "abort": false, 00:11:30.176 "seek_hole": true, 00:11:30.176 "seek_data": true, 00:11:30.176 "copy": false, 00:11:30.176 "nvme_iov_md": false 00:11:30.176 }, 00:11:30.176 "driver_specific": { 00:11:30.176 "lvol": { 00:11:30.176 "lvol_store_uuid": "3c029b3f-e44f-4e9f-bc46-0407d72fc7ab", 00:11:30.176 "base_bdev": "aio_bdev", 00:11:30.176 "thin_provision": false, 00:11:30.176 "num_allocated_clusters": 38, 00:11:30.176 "snapshot": false, 00:11:30.176 "clone": false, 00:11:30.176 "esnap_clone": false 00:11:30.176 } 00:11:30.176 } 00:11:30.176 } 00:11:30.176 ] 00:11:30.176 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:30.176 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:30.176 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:30.434 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:30.434 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:30.434 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:30.692 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:30.692 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:31.335 [2024-07-26 14:05:47.843861] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:31.335 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:31.335 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:31.335 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:31.336 14:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:31.336 request: 00:11:31.336 { 00:11:31.336 "uuid": "3c029b3f-e44f-4e9f-bc46-0407d72fc7ab", 00:11:31.336 "method": "bdev_lvol_get_lvstores", 00:11:31.336 "req_id": 1 00:11:31.336 } 00:11:31.336 Got JSON-RPC error response 00:11:31.336 response: 00:11:31.336 { 00:11:31.336 "code": -19, 00:11:31.336 "message": "No such device" 00:11:31.336 } 00:11:31.595 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:31.595 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:31.595 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:31.595 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:31.596 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:32.163 aio_bdev 00:11:32.163 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 586c370c-1a0a-4e77-bda9-12bdb50054ce 00:11:32.163 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=586c370c-1a0a-4e77-bda9-12bdb50054ce 00:11:32.164 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:32.164 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:32.164 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:32.164 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:32.164 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:32.422 14:05:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 586c370c-1a0a-4e77-bda9-12bdb50054ce -t 2000 00:11:32.681 [ 00:11:32.681 { 00:11:32.681 "name": "586c370c-1a0a-4e77-bda9-12bdb50054ce", 00:11:32.681 "aliases": [ 00:11:32.681 "lvs/lvol" 00:11:32.681 ], 00:11:32.681 "product_name": "Logical Volume", 00:11:32.681 "block_size": 4096, 00:11:32.681 "num_blocks": 38912, 00:11:32.681 "uuid": "586c370c-1a0a-4e77-bda9-12bdb50054ce", 00:11:32.681 "assigned_rate_limits": { 00:11:32.681 "rw_ios_per_sec": 0, 00:11:32.682 "rw_mbytes_per_sec": 0, 00:11:32.682 "r_mbytes_per_sec": 0, 00:11:32.682 "w_mbytes_per_sec": 0 00:11:32.682 }, 00:11:32.682 "claimed": false, 00:11:32.682 "zoned": false, 00:11:32.682 "supported_io_types": { 00:11:32.682 "read": true, 00:11:32.682 "write": true, 00:11:32.682 "unmap": true, 00:11:32.682 "flush": false, 00:11:32.682 "reset": true, 00:11:32.682 "nvme_admin": false, 00:11:32.682 "nvme_io": false, 00:11:32.682 "nvme_io_md": false, 00:11:32.682 "write_zeroes": true, 00:11:32.682 "zcopy": false, 00:11:32.682 "get_zone_info": false, 00:11:32.682 "zone_management": false, 00:11:32.682 "zone_append": false, 00:11:32.682 "compare": false, 00:11:32.682 "compare_and_write": false, 00:11:32.682 "abort": false, 00:11:32.682 "seek_hole": true, 00:11:32.682 "seek_data": true, 00:11:32.682 "copy": false, 00:11:32.682 "nvme_iov_md": false 00:11:32.682 }, 00:11:32.682 "driver_specific": { 00:11:32.682 "lvol": { 00:11:32.682 "lvol_store_uuid": "3c029b3f-e44f-4e9f-bc46-0407d72fc7ab", 00:11:32.682 "base_bdev": "aio_bdev", 00:11:32.682 "thin_provision": false, 00:11:32.682 "num_allocated_clusters": 38, 00:11:32.682 "snapshot": false, 00:11:32.682 "clone": false, 00:11:32.682 "esnap_clone": false 00:11:32.682 } 00:11:32.682 } 00:11:32.682 } 00:11:32.682 ] 00:11:32.682 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:32.682 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:32.682 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:32.942 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:32.942 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 00:11:32.942 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:33.200 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:33.200 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 586c370c-1a0a-4e77-bda9-12bdb50054ce 00:11:33.459 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c029b3f-e44f-4e9f-bc46-0407d72fc7ab 
00:11:33.719 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:33.979 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:33.979 00:11:33.979 real 0m22.129s 00:11:33.979 user 0m56.250s 00:11:33.979 sys 0m5.521s 00:11:33.979 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.979 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:33.979 ************************************ 00:11:33.979 END TEST lvs_grow_dirty 00:11:33.979 ************************************ 00:11:34.238 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:34.238 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:34.238 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:34.238 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:34.238 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:34.239 nvmf_trace.0 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.239 rmmod nvme_tcp 00:11:34.239 rmmod nvme_fabrics 00:11:34.239 rmmod nvme_keyring 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2438197 ']' 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2438197 00:11:34.239 
14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2438197 ']' 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2438197 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.239 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2438197 00:11:34.239 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.239 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.239 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2438197' 00:11:34.239 killing process with pid 2438197 00:11:34.239 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2438197 00:11:34.239 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2438197 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.499 14:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.037 00:11:37.037 real 0m48.208s 00:11:37.037 user 1m23.037s 00:11:37.037 sys 0m10.193s 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:37.037 ************************************ 00:11:37.037 END TEST nvmf_lvs_grow 00:11:37.037 ************************************ 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:37.037 ************************************ 00:11:37.037 START TEST nvmf_bdev_io_wait 00:11:37.037 ************************************ 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:37.037 * Looking for test storage... 00:11:37.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:37.037 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.038 
14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.038 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:11:39.584 14:05:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:39.584 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:39.584 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:39.584 Found net devices under 0000:84:00.0: cvl_0_0 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:39.584 Found net devices under 0000:84:00.1: cvl_0_1 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:39.584 14:05:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.584 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:39.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:39.585 00:11:39.585 --- 10.0.0.2 ping statistics --- 00:11:39.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.585 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:11:39.585 00:11:39.585 --- 10.0.0.1 ping statistics --- 00:11:39.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.585 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2440875 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2440875 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2440875 ']' 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.585 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.585 [2024-07-26 14:05:56.282175] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
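A minimal sketch of the nvmfappstart step traced above, assuming this run's repo layout (the /var/jenkins workspace prefix is shortened to ./ here) and with waitforlisten's retry cap elided: the target is launched inside the server-side network namespace, then its RPC socket is polled until it answers.

# Hedged condensation of nvmfappstart/waitforlisten as traced above; paths shortened.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target is up (timeout handling omitted).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done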
00:11:39.585 [2024-07-26 14:05:56.282288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.585 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.585 [2024-07-26 14:05:56.365238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.844 [2024-07-26 14:05:56.492979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.844 [2024-07-26 14:05:56.493038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.844 [2024-07-26 14:05:56.493054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.844 [2024-07-26 14:05:56.493067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.844 [2024-07-26 14:05:56.493079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.844 [2024-07-26 14:05:56.493188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.844 [2024-07-26 14:05:56.493279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.845 [2024-07-26 14:05:56.493351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.845 [2024-07-26 14:05:56.493355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.845 14:05:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 [2024-07-26 14:05:56.650068] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 Malloc0 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 [2024-07-26 14:05:56.712167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2440903 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2440904 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2440907 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:39.845 { 00:11:39.845 "params": { 00:11:39.845 "name": "Nvme$subsystem", 00:11:39.845 "trtype": "$TEST_TRANSPORT", 00:11:39.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.845 "adrfam": "ipv4", 00:11:39.845 "trsvcid": "$NVMF_PORT", 00:11:39.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.845 "hdgst": ${hdgst:-false}, 00:11:39.845 "ddgst": ${ddgst:-false} 00:11:39.845 }, 00:11:39.845 "method": "bdev_nvme_attach_controller" 00:11:39.845 } 00:11:39.845 EOF 00:11:39.845 )") 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:39.845 { 00:11:39.845 "params": { 00:11:39.845 "name": "Nvme$subsystem", 00:11:39.845 "trtype": "$TEST_TRANSPORT", 00:11:39.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.845 "adrfam": "ipv4", 00:11:39.845 "trsvcid": "$NVMF_PORT", 00:11:39.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.845 "hdgst": ${hdgst:-false}, 00:11:39.845 "ddgst": ${ddgst:-false} 00:11:39.845 }, 00:11:39.845 "method": "bdev_nvme_attach_controller" 00:11:39.845 } 00:11:39.845 EOF 00:11:39.845 )") 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2440909 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:39.845 { 00:11:39.845 "params": { 00:11:39.845 "name": "Nvme$subsystem", 00:11:39.845 "trtype": "$TEST_TRANSPORT", 00:11:39.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.845 "adrfam": "ipv4", 00:11:39.845 "trsvcid": "$NVMF_PORT", 00:11:39.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.845 "hdgst": ${hdgst:-false}, 00:11:39.845 "ddgst": ${ddgst:-false} 00:11:39.845 }, 00:11:39.845 "method": "bdev_nvme_attach_controller" 00:11:39.845 } 00:11:39.845 EOF 00:11:39.845 )") 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:39.845 { 00:11:39.845 "params": { 00:11:39.845 "name": "Nvme$subsystem", 00:11:39.845 "trtype": "$TEST_TRANSPORT", 00:11:39.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.845 "adrfam": "ipv4", 00:11:39.845 "trsvcid": "$NVMF_PORT", 00:11:39.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.845 "hdgst": ${hdgst:-false}, 00:11:39.845 "ddgst": ${ddgst:-false} 00:11:39.845 }, 00:11:39.845 "method": "bdev_nvme_attach_controller" 00:11:39.845 } 00:11:39.845 EOF 00:11:39.845 )") 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2440903 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:39.845 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
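The cat / jq . / IFS=, / printf '%s\n' traces above are gen_nvmf_target_json assembling a bdevperf JSON config on the fly: one bdev_nvme_attach_controller stanza per subsystem, comma-joined and validated through jq. A rough sketch of that assembly; the stanza fields mirror the resolved config printed below, but the helper name gen_target_json_sketch and the outer "subsystems"/"bdev" wrapper are assumptions, since the trace only shows the per-controller piece.

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas inside a subshell so IFS only changes there,
    # then let jq validate and pretty-print the final document.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

Called with no arguments it defaults to subsystem 1 and yields the Nvme1 stanza that appears, fully resolved, in the printf output below.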
00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:39.846 "params": { 00:11:39.846 "name": "Nvme1", 00:11:39.846 "trtype": "tcp", 00:11:39.846 "traddr": "10.0.0.2", 00:11:39.846 "adrfam": "ipv4", 00:11:39.846 "trsvcid": "4420", 00:11:39.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.846 "hdgst": false, 00:11:39.846 "ddgst": false 00:11:39.846 }, 00:11:39.846 "method": "bdev_nvme_attach_controller" 00:11:39.846 }' 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:39.846 "params": { 00:11:39.846 "name": "Nvme1", 00:11:39.846 "trtype": "tcp", 00:11:39.846 "traddr": "10.0.0.2", 00:11:39.846 "adrfam": "ipv4", 00:11:39.846 "trsvcid": "4420", 00:11:39.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.846 "hdgst": false, 00:11:39.846 "ddgst": false 00:11:39.846 }, 00:11:39.846 "method": "bdev_nvme_attach_controller" 00:11:39.846 }' 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:39.846 "params": { 00:11:39.846 "name": "Nvme1", 00:11:39.846 "trtype": "tcp", 00:11:39.846 "traddr": "10.0.0.2", 00:11:39.846 "adrfam": "ipv4", 00:11:39.846 "trsvcid": "4420", 00:11:39.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.846 "hdgst": false, 00:11:39.846 "ddgst": false 00:11:39.846 }, 00:11:39.846 "method": "bdev_nvme_attach_controller" 00:11:39.846 }' 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:39.846 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:39.846 "params": { 00:11:39.846 "name": "Nvme1", 00:11:39.846 "trtype": "tcp", 00:11:39.846 "traddr": "10.0.0.2", 00:11:39.846 "adrfam": "ipv4", 00:11:39.846 "trsvcid": "4420", 00:11:39.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.846 "hdgst": false, 00:11:39.846 "ddgst": false 00:11:39.846 }, 00:11:39.846 "method": "bdev_nvme_attach_controller" 00:11:39.846 }' 00:11:40.104 [2024-07-26 14:05:56.761732] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:11:40.104 [2024-07-26 14:05:56.761733] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:11:40.104 [2024-07-26 14:05:56.761732] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
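Each resolved config above is handed to its own bdevperf instance via process substitution, which is where the --json /dev/fd/63 in the earlier command lines comes from. A condensed sketch of the four launches as traced, assuming $BP as shorthand for the workspace bdevperf path and collapsing the script's sequential waits into one; the distinct shm IDs (-i 1..4) are what produce the per-instance DPDK file-prefixes spdk1..spdk4.

BP=./build/examples/bdevperf   # shortened; the log uses the full workspace path
$BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"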
00:11:40.104 [2024-07-26 14:05:56.761827] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:40.104 [2024-07-26 14:05:56.761827] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:40.104 [2024-07-26 14:05:56.761831] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:40.104 [2024-07-26 14:05:56.766867] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:11:40.104 [2024-07-26 14:05:56.766947] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:40.104 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.104 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.104 [2024-07-26 14:05:56.960358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.363 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.363 [2024-07-26 14:05:57.068661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:40.363 [2024-07-26 14:05:57.076008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.363 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.363 [2024-07-26 14:05:57.183204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:40.363 [2024-07-26 14:05:57.188422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.621 [2024-07-26 14:05:57.266029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.621 [2024-07-26 14:05:57.300232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:40.621 [2024-07-26 14:05:57.370547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:40.879 Running I/O for 1 seconds... 00:11:40.879 Running I/O for 1 seconds... 00:11:40.879 Running I/O for 1 seconds... 00:11:40.879 Running I/O for 1 seconds... 
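The reactor notices above line up with those masks: each single-bit core mask pins an instance's lone reactor to the matching core (0x10 -> core 4, 0x20 -> core 5, 0x40 -> core 6, 0x80 -> core 7). A quick bit-position check:

# Derive the core number from each mask by locating its set bit.
for m in 0x10 0x20 0x40 0x80; do
    v=$((m)) bit=0
    while (( v > 1 )); do v=$((v >> 1)); bit=$((bit + 1)); done
    printf '%s -> core %d\n' "$m" "$bit"
done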
00:11:41.813 00:11:41.813 Latency(us) 00:11:41.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.813 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:41.813 Nvme1n1 : 1.00 177645.21 693.93 0.00 0.00 717.79 286.72 952.70 00:11:41.813 =================================================================================================================== 00:11:41.813 Total : 177645.21 693.93 0.00 0.00 717.79 286.72 952.70 00:11:41.813 00:11:41.813 Latency(us) 00:11:41.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.813 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:41.813 Nvme1n1 : 1.02 5603.69 21.89 0.00 0.00 22630.89 8495.41 38253.61 00:11:41.813 =================================================================================================================== 00:11:41.813 Total : 5603.69 21.89 0.00 0.00 22630.89 8495.41 38253.61 00:11:41.813 00:11:41.813 Latency(us) 00:11:41.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.813 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:41.813 Nvme1n1 : 1.01 9451.48 36.92 0.00 0.00 13481.99 6456.51 23010.42 00:11:41.813 =================================================================================================================== 00:11:41.813 Total : 9451.48 36.92 0.00 0.00 13481.99 6456.51 23010.42 00:11:41.813 00:11:41.813 Latency(us) 00:11:41.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.813 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:41.813 Nvme1n1 : 1.01 5485.19 21.43 0.00 0.00 23250.19 6505.05 47380.10 00:11:41.813 =================================================================================================================== 00:11:41.813 Total : 5485.19 21.43 0.00 0.00 23250.19 6505.05 47380.10 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2440904 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2440907 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2440909 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
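The MiB/s column in these tables follows directly from IOPS at the fixed 4096-byte IO size (-o 4096): MiB/s = IOPS * 4096 / 2^20, i.e. IOPS / 256. Checking the flush row, whose rate dwarfs the others presumably because flush does no data movement against a Malloc-backed namespace:

awk 'BEGIN { printf "%.2f MiB/s\n", 177645.21 / 256 }'   # -> 693.93 MiB/s, matching the table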
00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:42.379 rmmod nvme_tcp 00:11:42.379 rmmod nvme_fabrics 00:11:42.379 rmmod nvme_keyring 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2440875 ']' 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2440875 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2440875 ']' 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2440875 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2440875 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2440875' 00:11:42.379 killing process with pid 2440875 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2440875 00:11:42.379 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2440875 00:11:42.637 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:42.637 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:42.638 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:42.638 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.638 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:42.638 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.638 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.638 14:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:45.170 00:11:45.170 real 0m8.064s 00:11:45.170 user 0m18.416s 00:11:45.170 sys 0m4.066s 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:45.170 ************************************ 00:11:45.170 END TEST 
nvmf_bdev_io_wait 00:11:45.170 ************************************ 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.170 ************************************ 00:11:45.170 START TEST nvmf_queue_depth 00:11:45.170 ************************************ 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:45.170 * Looking for test storage... 00:11:45.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.170 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:47.715 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.715 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:47.716 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:47.716 Found net devices under 0000:84:00.0: cvl_0_0 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:47.716 Found net devices under 0000:84:00.1: cvl_0_1 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:47.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:11:47.716 00:11:47.716 --- 10.0.0.2 ping statistics --- 00:11:47.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.716 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:11:47.716 00:11:47.716 --- 10.0.0.1 ping statistics --- 00:11:47.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.716 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2443384 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2443384 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2443384 ']' 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
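The nvmf_tcp_init sequence above builds the whole test topology on a single host: one E810 port (cvl_0_0) is moved into a private network namespace to serve as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, wired back-to-back over 10.0.0.0/24. A minimal standalone sketch of that setup, using the interface names and addresses from this run (both are host-specific; the commands themselves are the ones executed above):

  # put the target-side port in its own netns so target and initiator can share one host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends and bring the links up
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic (port 4420) arriving on the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions, as the pings above do
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1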
00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.716 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 [2024-07-26 14:06:04.374423] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:11:47.716 [2024-07-26 14:06:04.374556] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.716 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.716 [2024-07-26 14:06:04.459570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.974 [2024-07-26 14:06:04.605457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.974 [2024-07-26 14:06:04.605533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.974 [2024-07-26 14:06:04.605550] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.974 [2024-07-26 14:06:04.605564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.974 [2024-07-26 14:06:04.605576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.974 [2024-07-26 14:06:04.605609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.974 [2024-07-26 14:06:04.787742] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.974 Malloc0 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:47.974 [2024-07-26 14:06:04.846712] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2443405 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2443405 /var/tmp/bdevperf.sock 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2443405 ']' 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:47.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:47.974 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.975 14:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.232 [2024-07-26 14:06:04.906120] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
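With the target configured and listening, queue_depth.sh starts bdevperf (its EAL banner begins just above) and runs a 10-second verify workload at queue depth 1024 against the exported namespace. The rpc_cmd calls above map onto scripts/rpc.py; condensed into a sketch (paths shortened relative to the SPDK tree, and the target's RPC socket left at its /var/tmp/spdk.sock default):

  # target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem exporting it
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf waits (-z) on its own RPC socket until a bdev is attached
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests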
00:11:48.232 [2024-07-26 14:06:04.906212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2443405 ] 00:11:48.232 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.232 [2024-07-26 14:06:04.981339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.232 [2024-07-26 14:06:05.104736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.489 14:06:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.489 14:06:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:48.489 14:06:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:48.489 14:06:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.489 14:06:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:48.746 NVMe0n1 00:11:48.746 14:06:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.746 14:06:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:48.746 Running I/O for 10 seconds... 00:12:00.979 00:12:00.979 Latency(us) 00:12:00.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.979 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:00.979 Verification LBA range: start 0x0 length 0x4000 00:12:00.979 NVMe0n1 : 10.10 8193.68 32.01 0.00 0.00 124401.22 24855.13 77283.93 00:12:00.979 =================================================================================================================== 00:12:00.979 Total : 8193.68 32.01 0.00 0.00 124401.22 24855.13 77283.93 00:12:00.979 0 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2443405 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2443405 ']' 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2443405 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2443405 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2443405' 00:12:00.979 killing process with pid 2443405 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2443405 00:12:00.979 Received shutdown 
signal, test time was about 10.000000 seconds 00:12:00.979 00:12:00.979 Latency(us) 00:12:00.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.979 =================================================================================================================== 00:12:00.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:00.979 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2443405 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.979 rmmod nvme_tcp 00:12:00.979 rmmod nvme_fabrics 00:12:00.979 rmmod nvme_keyring 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2443384 ']' 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2443384 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2443384 ']' 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2443384 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2443384 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2443384' 00:12:00.979 killing process with pid 2443384 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2443384 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2443384 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.979 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.916 00:12:01.916 real 0m17.021s 00:12:01.916 user 0m23.142s 00:12:01.916 sys 0m3.894s 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:01.916 ************************************ 00:12:01.916 END TEST nvmf_queue_depth 00:12:01.916 ************************************ 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:01.916 ************************************ 00:12:01.916 START TEST nvmf_target_multipath 00:12:01.916 ************************************ 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:01.916 * Looking for test storage... 
00:12:01.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.916 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.917 14:06:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:04.451 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:04.451 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:04.451 Found net devices under 0000:84:00.0: cvl_0_0 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.451 14:06:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:04.451 Found net devices under 0000:84:00.1: cvl_0_1 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.451 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.452 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.711 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:12:04.711 00:12:04.711 --- 10.0.0.2 ping statistics --- 00:12:04.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.711 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:12:04.711 00:12:04.711 --- 10.0.0.1 ping statistics --- 00:12:04.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.711 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:04.711 only one NIC for nvmf test 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.711 rmmod nvme_tcp 00:12:04.711 rmmod nvme_fabrics 00:12:04.711 rmmod nvme_keyring 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.711 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.246 00:12:07.246 real 0m4.965s 
00:12:07.246 user 0m0.932s
00:12:07.246 sys 0m2.036s
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:12:07.246 ************************************
00:12:07.246 END TEST nvmf_target_multipath
00:12:07.246 ************************************
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:12:07.246 ************************************
00:12:07.246 START TEST nvmf_zcopy
00:12:07.246 ************************************
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:12:07.246 * Looking for test storage...
00:12:07.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:07.246 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:07.247 14:06:23
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.247 14:06:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.247 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:09.780 14:06:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:09.780 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:09.780 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.780 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:09.780 Found net devices under 0000:84:00.0: cvl_0_0 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:09.781 Found net devices under 0000:84:00.1: cvl_0_1 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.781 14:06:26 
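The discovery pass above walks each supported e810 PCI function and resolves its kernel net device through sysfs, which is where the "Found net devices under 0000:84:00.x: cvl_0_x" lines come from. A condensed sketch of that lookup for the two functions found on this node:

    # Map NVMe-oF-capable PCI functions to their net devices via sysfs
    for pci in 0000:84:00.0 0000:84:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue          # no bound net driver, skip
            echo "Found net devices under $pci: ${path##*/}"
        done
    done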
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:09.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:09.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms
00:12:09.781
00:12:09.781 --- 10.0.0.2 ping statistics ---
00:12:09.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:09.781 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:09.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:09.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:12:09.781 00:12:09.781 --- 10.0.0.1 ping statistics --- 00:12:09.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.781 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2449248 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2449248 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2449248 ']' 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.781 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.781 [2024-07-26 14:06:26.521571] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
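Before nvmf_tgt comes up, the trace above has wired a point-to-point test topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace, and the back-to-back pair gets 10.0.0.2 (target) and 10.0.0.1 (initiator), verified by the two pings. A standalone recap of those commands, minus the flush and retry scaffolding:

    # Target port in its own netns; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator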
00:12:09.781 [2024-07-26 14:06:26.521657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.781 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.781 [2024-07-26 14:06:26.606043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.040 [2024-07-26 14:06:26.748722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.040 [2024-07-26 14:06:26.748792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.040 [2024-07-26 14:06:26.748812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.040 [2024-07-26 14:06:26.748828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.040 [2024-07-26 14:06:26.748842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.040 [2024-07-26 14:06:26.748891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.040 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.040 [2024-07-26 14:06:26.924106] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 [2024-07-26 14:06:26.940364] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 malloc0 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:10.298 { 00:12:10.298 "params": { 00:12:10.298 "name": "Nvme$subsystem", 00:12:10.298 "trtype": "$TEST_TRANSPORT", 00:12:10.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.298 "adrfam": "ipv4", 00:12:10.298 "trsvcid": "$NVMF_PORT", 00:12:10.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.298 "hdgst": ${hdgst:-false}, 00:12:10.298 "ddgst": ${ddgst:-false} 00:12:10.298 }, 00:12:10.298 "method": "bdev_nvme_attach_controller" 00:12:10.298 } 00:12:10.298 EOF 00:12:10.298 )") 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:10.298 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
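The rpc_cmd calls traced above are the entire target configuration for this test. Stripped of the wrapper, the equivalent standalone sequence against the target's /var/tmp/spdk.sock would look like the sketch below (arguments taken verbatim from the trace; the $rpc shorthand is ours):

    # Configure the zcopy-enabled TCP target as zcopy.sh does above
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # options from NVMF_TRANSPORT_OPTS, zero-copy on
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                      # -a: allow any host, -m: max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1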
00:12:10.298 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:12:10.298 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:10.298 "params": {
00:12:10.298 "name": "Nvme1",
00:12:10.298 "trtype": "tcp",
00:12:10.298 "traddr": "10.0.0.2",
00:12:10.298 "adrfam": "ipv4",
00:12:10.298 "trsvcid": "4420",
00:12:10.298 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:10.298 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:10.298 "hdgst": false,
00:12:10.298 "ddgst": false
00:12:10.298 },
00:12:10.298 "method": "bdev_nvme_attach_controller"
00:12:10.298 }'
00:12:10.298 [2024-07-26 14:06:27.047730] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:12:10.298 [2024-07-26 14:06:27.047824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449302 ]
00:12:10.298 EAL: No free 2048 kB hugepages reported on node 1
00:12:10.298 [2024-07-26 14:06:27.121971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:10.557 [2024-07-26 14:06:27.246841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:10.557 Running I/O for 10 seconds...
00:12:22.754
00:12:22.754 Latency(us)
00:12:22.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:22.754 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:22.754 Verification LBA range: start 0x0 length 0x1000
00:12:22.754 Nvme1n1 : 10.02 5659.07 44.21 0.00 0.00 22555.41 3252.53 33010.73
00:12:22.754 ===================================================================================================================
00:12:22.754 Total : 5659.07 44.21 0.00 0.00 22555.41 3252.53 33010.73
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2450598
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:22.754 {
00:12:22.754 "params": {
00:12:22.754 "name": "Nvme$subsystem",
00:12:22.754 "trtype": "$TEST_TRANSPORT",
00:12:22.754 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:22.754 "adrfam": "ipv4",
00:12:22.754 "trsvcid": "$NVMF_PORT",
00:12:22.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:22.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:22.754 "hdgst": ${hdgst:-false},
00:12:22.754 "ddgst": ${ddgst:-false}
00:12:22.754 },
00:12:22.754 "method": "bdev_nvme_attach_controller"
00:12:22.754 }
00:12:22.754 EOF
00:12:22.754 )")
00:12:22.754 14:06:37
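Each bdevperf run reads its bdev layout from an anonymous pipe (--json /dev/fd/62 for the 10 s verify pass above, /dev/fd/63 for the 5 s randrw pass starting here). The printf/jq pipeline only shows the inner attach entry; the full payload appears to be the standard SPDK JSON config wrapper, sketched below with a heredoc (the "subsystems" framing is inferred, not printed in the trace):

    # Assumed shape of the config handed to bdevperf in the runs above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /dev/stdin -t 10 -q 128 -w verify -o 8192 <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF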
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:22.754 [2024-07-26 14:06:37.785030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.754 [2024-07-26 14:06:37.785092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:22.754 14:06:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.754 "params": { 00:12:22.754 "name": "Nvme1", 00:12:22.754 "trtype": "tcp", 00:12:22.754 "traddr": "10.0.0.2", 00:12:22.754 "adrfam": "ipv4", 00:12:22.754 "trsvcid": "4420", 00:12:22.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.754 "hdgst": false, 00:12:22.754 "ddgst": false 00:12:22.754 }, 00:12:22.754 "method": "bdev_nvme_attach_controller" 00:12:22.754 }' 00:12:22.754 [2024-07-26 14:06:37.792973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.754 [2024-07-26 14:06:37.793008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.754 [2024-07-26 14:06:37.800986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.754 [2024-07-26 14:06:37.801017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.754 [2024-07-26 14:06:37.809005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.754 [2024-07-26 14:06:37.809036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.754 [2024-07-26 14:06:37.817028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.754 [2024-07-26 14:06:37.817058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.754 [2024-07-26 14:06:37.825049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.754 [2024-07-26 14:06:37.825079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.754 [2024-07-26 14:06:37.833074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.754 [2024-07-26 14:06:37.833104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.754 [2024-07-26 14:06:37.836687] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:12:22.755 [2024-07-26 14:06:37.836774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450598 ] 00:12:22.755 [2024-07-26 14:06:37.841096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.841127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.849118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.849147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.857140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.857170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.865164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.865194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.873188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.873218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.755 [2024-07-26 14:06:37.881192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.881217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.889210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.889234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.897232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.897256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.905254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.905278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.911618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.755 [2024-07-26 14:06:37.913278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.913302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.921340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.921381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.929330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.929357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.937347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 
14:06:37.937372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.945369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.945394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.953390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.953415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.961412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.961444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.969446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.969476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.977480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.977510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.985514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.985554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:37.993504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:37.993531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.001519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.001544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.009542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.009567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.017564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.017589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.025586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.025626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.033609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.033635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.036090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.755 [2024-07-26 14:06:38.041629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.041654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.049652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.049678] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.057698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.057736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.065720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.065758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.073745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.073782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.081771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.081810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.089793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.089833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.097818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.097857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.105809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.105835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.113862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.113900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.121886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.121927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.129907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.129945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.137908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.137934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.145921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.145946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.153945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.153970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.161981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.162012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.170002] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.170032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.178027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.178056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.186053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.186081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.194070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.194095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.202092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.202117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.210118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.210143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.218144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.755 [2024-07-26 14:06:38.218169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.755 [2024-07-26 14:06:38.226171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.756 [2024-07-26 14:06:38.226198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.756 [2024-07-26 14:06:38.234199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.756 [2024-07-26 14:06:38.234227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.756 [2024-07-26 14:06:38.242221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.756 [2024-07-26 14:06:38.242250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.756 [2024-07-26 14:06:38.250240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.756 [2024-07-26 14:06:38.250268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.756 [2024-07-26 14:06:38.259092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.756 [2024-07-26 14:06:38.259123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.756 [2024-07-26 14:06:38.266289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.756 [2024-07-26 14:06:38.266318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.756 Running I/O for 5 seconds... 
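The subsystem.c/nvmf_rpc.c pairs that repeat through this stretch are the point of this phase: while the 5-second randrw job runs, add-namespace RPCs that collide with the existing NSID keep being issued, and the target has to reject every one ("Requested NSID 1 already in use") without disturbing in-flight I/O. The failure mode is reproducible in isolation (a sketch; -n pins the NSID explicitly):

    # A second add with an occupied NSID must fail cleanly
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add: NSID 1 registered
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: NSID 1 already in use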
00:12:22.756 [2024-07-26 14:06:38.274312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:22.756 [2024-07-26 14:06:38.274338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every ~11-14 ms from 14:06:38.274 through 14:06:41.832 (Jenkins clock 00:12:22.756 through 00:12:25.119); only the timestamps differ between repetitions ...]
00:12:25.119 [2024-07-26 14:06:41.820989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:25.119 [2024-07-26 14:06:41.821018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:25.119 [2024-07-26 14:06:41.832355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:25.119 [2024-07-26 14:06:41.832384]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.843661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.843690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.855263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.855292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.867167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.867196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.878771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.878809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.890426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.890465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.902679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.902709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.914701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.914731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.926365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.926400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.938172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.938201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.949884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.949914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.961824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.961853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.973742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.973771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.985362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.985391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.119 [2024-07-26 14:06:41.996800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.119 [2024-07-26 14:06:41.996829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.008492] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.008525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.020305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.020335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.031945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.031974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.043642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.043672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.056946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.056983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.067841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.067871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.079663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.079693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.091692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.091722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.103564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.103613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.115042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.115073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.126621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.126652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.138087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.138129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.149949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.149987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.161630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.161660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.172729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.172759] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.184125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.184165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.195914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.195944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.207140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.207170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.218729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.377 [2024-07-26 14:06:42.218759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.377 [2024-07-26 14:06:42.229894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.378 [2024-07-26 14:06:42.229923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.378 [2024-07-26 14:06:42.241315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.378 [2024-07-26 14:06:42.241346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.378 [2024-07-26 14:06:42.252994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.378 [2024-07-26 14:06:42.253024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.264436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.264465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.275990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.276019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.290041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.290071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.301237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.301267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.312661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.312691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.325920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.325971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.336879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.336919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.348344] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.348374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.359790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.359828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.370814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.370844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.382499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.382528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.394217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.394246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.406014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.406044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.418303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.418333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.429833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.636 [2024-07-26 14:06:42.429863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.636 [2024-07-26 14:06:42.441633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.637 [2024-07-26 14:06:42.441671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.637 [2024-07-26 14:06:42.453290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.637 [2024-07-26 14:06:42.453320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.637 [2024-07-26 14:06:42.464864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.637 [2024-07-26 14:06:42.464893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.637 [2024-07-26 14:06:42.477941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.637 [2024-07-26 14:06:42.477971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.637 [2024-07-26 14:06:42.488952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.637 [2024-07-26 14:06:42.488984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.637 [2024-07-26 14:06:42.501173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.637 [2024-07-26 14:06:42.501213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.637 [2024-07-26 14:06:42.512885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.637 [2024-07-26 14:06:42.512914] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.524405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.524444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.535985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.536015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.547658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.547688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.559119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.559148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.570536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.570567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.581980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.582010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.593492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.593522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.604838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.604868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.616142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.616171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.627786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.627823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.639783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.639813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.651289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.651319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.894 [2024-07-26 14:06:42.662962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.894 [2024-07-26 14:06:42.662991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.674811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.674841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.686377] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.686415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.698125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.698162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.709393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.709424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.720592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.720623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.732062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.732092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.743594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.743624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.754958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.754988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.766423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.766462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.895 [2024-07-26 14:06:42.777829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.895 [2024-07-26 14:06:42.777859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.152 [2024-07-26 14:06:42.790658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.152 [2024-07-26 14:06:42.790688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.152 [2024-07-26 14:06:42.801730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.152 [2024-07-26 14:06:42.801761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.152 [2024-07-26 14:06:42.812959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.812989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.823670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.823700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.839056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.839088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.850041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.850072] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.861300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.861331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.874539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.874569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.885203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.885233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.896390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.896421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.909480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.909510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.920019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.920049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.932094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.932124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.943525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.943555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.954673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.954703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.966078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.966108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.977232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.977262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.988643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.988674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:42.999732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:42.999762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:43.011251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:43.011281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:43.022803] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:43.022832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.153 [2024-07-26 14:06:43.034151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.153 [2024-07-26 14:06:43.034181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.410 [2024-07-26 14:06:43.045574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.410 [2024-07-26 14:06:43.045608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.410 [2024-07-26 14:06:43.057215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.410 [2024-07-26 14:06:43.057245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.410 [2024-07-26 14:06:43.068922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.410 [2024-07-26 14:06:43.068952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.410 [2024-07-26 14:06:43.080620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.410 [2024-07-26 14:06:43.080650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.410 [2024-07-26 14:06:43.092552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.410 [2024-07-26 14:06:43.092593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.410 [2024-07-26 14:06:43.104285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.410 [2024-07-26 14:06:43.104314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.411 [2024-07-26 14:06:43.116216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.411 [2024-07-26 14:06:43.116246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.411 [2024-07-26 14:06:43.127706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.411 [2024-07-26 14:06:43.127736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.411 [2024-07-26 14:06:43.139355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.411 [2024-07-26 14:06:43.139386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.411 [2024-07-26 14:06:43.150772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.411 [2024-07-26 14:06:43.150812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.411 [2024-07-26 14:06:43.162217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.411 [2024-07-26 14:06:43.162247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.411 [2024-07-26 14:06:43.173632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.411 [2024-07-26 14:06:43.173666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.411 [2024-07-26 14:06:43.185159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.411 [2024-07-26 14:06:43.185189] 
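The pair of frames above repeats because the add-namespace RPC kept being issued while NSID 1 on nqn.2016-06.io.spdk:cnode1 was still allocated; each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext() and surfaces through nvmf_rpc_ns_paused(). A minimal sketch of the same collision with SPDK's stock rpc.py, assuming a running nvmf target that already exposes that subsystem; the malloc bdev names are illustrative, only the NQN, NSID, and the 64/512 malloc geometry come from this log:

    # two RAM-backed bdevs to attach (hypothetical names; 64 MB, 512 B blocks)
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    # first add claims NSID 1 on the subsystem and succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # second add requests the same NSID and fails with
    # "Requested NSID 1 already in use", the exact pair of errors logged above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1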
00:12:26.411 [2024-07-26 14:06:43.196600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:26.411 [2024-07-26 14:06:43.196629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats at the same cadence through 14:06:43.291 ...]
00:12:26.411 [2024-07-26 14:06:43.295667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:26.411 [2024-07-26 14:06:43.295695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:26.669 
00:12:26.669 Latency(us)
00:12:26.669 Device Information                                                          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:12:26.669 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:26.669 Nvme1n1                                                                     :       5.01   10971.01      85.71       0.00       0.00   11652.87    5339.97   22136.60
00:12:26.669 ===================================================================================================================
00:12:26.669 Total                                                                       :              10971.01      85.71       0.00       0.00   11652.87    5339.97   22136.60
00:12:26.669 [2024-07-26 14:06:43.303688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:26.669 [2024-07-26 14:06:43.303716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:26.669 [2024-07-26 14:06:43.311704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:26.669 [2024-07-26 14:06:43.311731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
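A quick cross-check of the table, an illustrative recomputation rather than output from the run: at the 8192-byte I/O size stated in the job line, the reported IOPS and MiB/s columns are mutually consistent.

    # 10971.01 IOPS * 8192 B per I/O = 89,874,513 B/s; divide by 1024*1024 for MiB/s
    echo '10971.01 * 8192 / (1024 * 1024)' | bc -l    # prints ~85.71, matching the MiB/s column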
00:12:26.669 [2024-07-26 14:06:43.319726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:26.669 [2024-07-26 14:06:43.319752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps repeating, roughly 28 further times and now at an ~8 ms cadence as the add-ns loop winds down, from 14:06:43.327 through 14:06:43.544 ...]
00:12:26.670 [2024-07-26 14:06:43.552423] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.670 [2024-07-26 14:06:43.552481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.928 [2024-07-26 14:06:43.560450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.928 [2024-07-26 14:06:43.560496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.928 [2024-07-26 14:06:43.568421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.928 [2024-07-26 14:06:43.568458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.928 [2024-07-26 14:06:43.576438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.928 [2024-07-26 14:06:43.576461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.928 [2024-07-26 14:06:43.584459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.928 [2024-07-26 14:06:43.584483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2450598) - No such process 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2450598 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:26.928 delay0 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.928 14:06:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:26.928 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.928 [2024-07-26 14:06:43.719667] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:35.037 Initializing NVMe Controllers 00:12:35.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:35.037 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:35.037 Initialization complete. Launching workers. 00:12:35.037 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 294, failed: 9899 00:12:35.037 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10123, failed to submit 70 00:12:35.037 success 9977, unsuccess 146, failed 0 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.037 rmmod nvme_tcp 00:12:35.037 rmmod nvme_fabrics 00:12:35.037 rmmod nvme_keyring 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2449248 ']' 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2449248 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2449248 ']' 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2449248 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2449248 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2449248' 00:12:35.037 killing process with pid 2449248 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2449248 00:12:35.037 14:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2449248 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.037 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.420 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.420 00:12:36.420 real 0m29.575s 00:12:36.420 user 0m41.858s 00:12:36.420 sys 0m10.399s 00:12:36.420 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.420 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:36.421 ************************************ 00:12:36.421 END TEST nvmf_zcopy 00:12:36.421 ************************************ 00:12:36.421 14:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:36.421 14:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:36.421 14:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.421 14:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:36.421 ************************************ 00:12:36.421 START TEST nvmf_nmic 00:12:36.421 ************************************ 00:12:36.421 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:36.679 * Looking for test storage... 
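An aside on the zcopy timing block above, again an illustrative recomputation and not run output: user plus sys CPU time comes to 52.257 s against 29.575 s of wall time, so the stage averaged close to two busy cores.

    # average core utilization during the zcopy stage = (user + sys) / real
    echo '(41.858 + 10.399) / 29.575' | bc -l    # prints ~1.77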
00:12:36.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:36.679 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.680 14:06:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.680 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:39.213 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:39.213 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.213 14:06:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:39.213 Found net devices under 0000:84:00.0: cvl_0_0 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.213 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:39.214 Found net devices under 0000:84:00.1: cvl_0_1 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.214 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:12:39.473 00:12:39.473 --- 10.0.0.2 ping statistics --- 00:12:39.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.473 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:12:39.473 00:12:39.473 --- 10.0.0.1 ping statistics --- 00:12:39.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.473 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2454137 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2454137 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2454137 ']' 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.473 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.473 [2024-07-26 14:06:56.332597] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:12:39.473 [2024-07-26 14:06:56.332693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.732 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.732 [2024-07-26 14:06:56.415760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.732 [2024-07-26 14:06:56.540952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.732 [2024-07-26 14:06:56.541011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.732 [2024-07-26 14:06:56.541027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.732 [2024-07-26 14:06:56.541041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.732 [2024-07-26 14:06:56.541052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
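Everything from nvmfappstart through the reactor notices above is the target coming up inside the test namespace. Reduced to its essentials, run from the SPDK checkout, and with the real waitforlisten helper replaced by a simple polling loop over rpc.py (an approximation of its behavior, not its actual logic), the bring-up is:

    # Launch nvmf_tgt inside the namespace prepared earlier (nvmf/common.sh@480).
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Rough stand-in for waitforlisten: block until the RPC socket answers.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done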
00:12:39.732 [2024-07-26 14:06:56.541156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.732 [2024-07-26 14:06:56.541215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.732 [2024-07-26 14:06:56.541280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.732 [2024-07-26 14:06:56.541283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 [2024-07-26 14:06:56.698941] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 Malloc0 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 [2024-07-26 14:06:56.752889] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:39.990 test case1: single bdev can't be used in multiple subsystems 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.990 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.990 [2024-07-26 14:06:56.776695] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:39.990 [2024-07-26 14:06:56.776728] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:39.990 [2024-07-26 14:06:56.776746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.990 request: 00:12:39.990 { 00:12:39.990 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:39.990 "namespace": { 00:12:39.990 "bdev_name": "Malloc0", 00:12:39.991 "no_auto_visible": false 00:12:39.991 }, 00:12:39.991 "method": "nvmf_subsystem_add_ns", 00:12:39.991 "req_id": 1 00:12:39.991 } 00:12:39.991 Got JSON-RPC error response 00:12:39.991 response: 00:12:39.991 { 00:12:39.991 "code": -32602, 00:12:39.991 "message": "Invalid parameters" 00:12:39.991 } 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:39.991 Adding namespace failed - expected result. 
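Minus the xtrace plumbing, test case1 above boils down to a handful of RPCs; the condensed sequence below (assuming the target's default /var/tmp/spdk.sock RPC socket) reproduces the expected failure:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # Malloc0 is already claimed exclusive_write by cnode1, so this call fails
    # with the -32602 "Invalid parameters" JSON-RPC error logged above -- as intended.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0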
00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:39.991 test case2: host connect to nvmf target in multiple paths 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.991 [2024-07-26 14:06:56.784832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.991 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.925 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:41.490 14:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.490 14:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.490 14:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.490 14:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.490 14:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.388 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.388 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.388 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.388 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.388 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.388 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:43.388 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:43.388 [global] 00:12:43.388 thread=1 00:12:43.388 invalidate=1 00:12:43.388 rw=write 00:12:43.388 time_based=1 00:12:43.388 runtime=1 00:12:43.388 ioengine=libaio 00:12:43.388 direct=1 00:12:43.388 bs=4096 00:12:43.388 iodepth=1 00:12:43.388 norandommap=0 00:12:43.388 numjobs=1 00:12:43.388 00:12:43.388 verify_dump=1 00:12:43.388 verify_backlog=512 00:12:43.388 verify_state_save=0 00:12:43.388 do_verify=1 00:12:43.388 verify=crc32c-intel 00:12:43.388 [job0] 00:12:43.388 filename=/dev/nvme0n1 00:12:43.388 Could not set queue depth (nvme0n1) 00:12:43.645 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:12:43.645 fio-3.35 00:12:43.645 Starting 1 thread 00:12:45.018 00:12:45.018 job0: (groupid=0, jobs=1): err= 0: pid=2454770: Fri Jul 26 14:07:01 2024 00:12:45.018 read: IOPS=327, BW=1311KiB/s (1343kB/s)(1360KiB/1037msec) 00:12:45.018 slat (nsec): min=6094, max=53635, avg=8581.66, stdev=5659.35 00:12:45.018 clat (usec): min=292, max=41096, avg=2609.74, stdev=9341.30 00:12:45.018 lat (usec): min=301, max=41115, avg=2618.32, stdev=9345.61 00:12:45.019 clat percentiles (usec): 00:12:45.019 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 326], 00:12:45.019 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:12:45.019 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[40633], 00:12:45.019 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:45.019 | 99.99th=[41157] 00:12:45.019 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:12:45.019 slat (usec): min=8, max=26867, avg=62.94, stdev=1186.93 00:12:45.019 clat (usec): min=180, max=321, avg=211.82, stdev=21.25 00:12:45.019 lat (usec): min=189, max=27103, avg=274.76, stdev=1188.21 00:12:45.019 clat percentiles (usec): 00:12:45.019 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:12:45.019 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:12:45.019 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 251], 00:12:45.019 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 322], 00:12:45.019 | 99.99th=[ 322] 00:12:45.019 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:45.019 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:45.019 lat (usec) : 250=56.92%, 500=40.73%, 750=0.12% 00:12:45.019 lat (msec) : 50=2.23% 00:12:45.019 cpu : usr=0.58%, sys=1.06%, ctx=855, majf=0, minf=2 00:12:45.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.019 issued rwts: total=340,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.019 00:12:45.019 Run status group 0 (all jobs): 00:12:45.019 READ: bw=1311KiB/s (1343kB/s), 1311KiB/s-1311KiB/s (1343kB/s-1343kB/s), io=1360KiB (1393kB), run=1037-1037msec 00:12:45.019 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec 00:12:45.019 00:12:45.019 Disk stats (read/write): 00:12:45.019 nvme0n1: ios=388/512, merge=0/0, ticks=1043/101, in_queue=1144, util=98.30% 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:45.019 14:07:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.019 rmmod nvme_tcp 00:12:45.019 rmmod nvme_fabrics 00:12:45.019 rmmod nvme_keyring 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2454137 ']' 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2454137 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2454137 ']' 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2454137 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2454137 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2454137' 00:12:45.019 killing process with pid 2454137 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2454137 00:12:45.019 14:07:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2454137 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.332 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:47.891 00:12:47.891 real 0m10.868s 00:12:47.891 user 0m23.106s 00:12:47.891 sys 0m2.971s 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:47.891 ************************************ 00:12:47.891 END TEST nvmf_nmic 00:12:47.891 ************************************ 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:47.891 ************************************ 00:12:47.891 START TEST nvmf_fio_target 00:12:47.891 ************************************ 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:47.891 * Looking for test storage... 00:12:47.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.891 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.892 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:50.426 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:50.427 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:50.427 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:50.427 Found net devices under 0000:84:00.0: cvl_0_0 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:50.427 Found net devices under 0000:84:00.1: cvl_0_1 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:50.427 14:07:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:50.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:12:50.427 00:12:50.427 --- 10.0.0.2 ping statistics --- 00:12:50.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.427 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:12:50.427 00:12:50.427 --- 10.0.0.1 ping statistics --- 00:12:50.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.427 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:50.427 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2456992 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2456992 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2456992 ']' 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.428 14:07:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.428 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.686 [2024-07-26 14:07:07.333691] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:12:50.686 [2024-07-26 14:07:07.333834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.686 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.686 [2024-07-26 14:07:07.427221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.686 [2024-07-26 14:07:07.550585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.686 [2024-07-26 14:07:07.550645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.686 [2024-07-26 14:07:07.550662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.686 [2024-07-26 14:07:07.550675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.686 [2024-07-26 14:07:07.550687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
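For readers reconstructing this environment by hand, the namespace plumbing traced in nvmf/common.sh above (lines @229-@268 of the trace) reduces to the minimal sketch below. The interface names (cvl_0_0, cvl_0_1), the namespace name, the 10.0.0.0/24 addressing, and the port-4420 iptables rule are all taken verbatim from the log; the shebang, set -euo pipefail, and the ordering comments are illustrative additions, and the script assumes root privileges on a host where both net devices already exist.

  #!/usr/bin/env bash
  # Minimal sketch of the target/initiator split used by the test above.
  # Assumes cvl_0_0 and cvl_0_1 exist and that this runs as root.
  set -euo pipefail

  NS=cvl_0_0_ns_spdk

  # Start from clean interfaces, then hide the target NIC in its own namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"

  # Initiator side stays in the default namespace; target side lives in $NS.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # Allow NVMe/TCP traffic (port 4420) in through the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity checks, mirroring the log: each side pings the other.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

This is why the target application below is launched as ip netns exec cvl_0_0_ns_spdk nvmf_tgt: the TCP listener it later creates on 10.0.0.2:4420 sits inside the namespace, while the nvme connect initiator reaches it from the default namespace over the cvl_0_1 side of the link.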
00:12:50.686 [2024-07-26 14:07:07.550745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.686 [2024-07-26 14:07:07.551082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.686 [2024-07-26 14:07:07.551145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.686 [2024-07-26 14:07:07.551149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.944 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.944 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:50.944 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.944 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:50.944 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.944 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.944 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:51.509 [2024-07-26 14:07:08.179600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.509 14:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.074 14:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:52.074 14:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.333 14:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:52.333 14:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.899 14:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:52.899 14:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:53.464 14:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:53.464 14:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:53.722 14:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:54.287 14:07:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:54.287 14:07:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:54.851 14:07:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:54.851 14:07:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:55.109 14:07:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:55.109 14:07:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:55.674 14:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:56.239 14:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:56.239 14:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.513 14:07:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:56.513 14:07:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:56.773 14:07:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.337 [2024-07-26 14:07:13.964247] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.337 14:07:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:57.593 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:57.850 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.781 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:58.781 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.781 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.781 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:58.781 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:58.781 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:00.679 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:00.679 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:00.679 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.679 14:07:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:00.679 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.679 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:00.679 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:00.679 [global] 00:13:00.679 thread=1 00:13:00.679 invalidate=1 00:13:00.679 rw=write 00:13:00.679 time_based=1 00:13:00.679 runtime=1 00:13:00.679 ioengine=libaio 00:13:00.679 direct=1 00:13:00.679 bs=4096 00:13:00.679 iodepth=1 00:13:00.679 norandommap=0 00:13:00.679 numjobs=1 00:13:00.679 00:13:00.679 verify_dump=1 00:13:00.679 verify_backlog=512 00:13:00.679 verify_state_save=0 00:13:00.679 do_verify=1 00:13:00.679 verify=crc32c-intel 00:13:00.679 [job0] 00:13:00.679 filename=/dev/nvme0n1 00:13:00.679 [job1] 00:13:00.679 filename=/dev/nvme0n2 00:13:00.679 [job2] 00:13:00.679 filename=/dev/nvme0n3 00:13:00.679 [job3] 00:13:00.679 filename=/dev/nvme0n4 00:13:00.679 Could not set queue depth (nvme0n1) 00:13:00.679 Could not set queue depth (nvme0n2) 00:13:00.679 Could not set queue depth (nvme0n3) 00:13:00.679 Could not set queue depth (nvme0n4) 00:13:00.936 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.936 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.936 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.936 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.936 fio-3.35 00:13:00.936 Starting 4 threads 00:13:02.310 00:13:02.310 job0: (groupid=0, jobs=1): err= 0: pid=2458337: Fri Jul 26 14:07:18 2024 00:13:02.310 read: IOPS=612, BW=2448KiB/s (2507kB/s)(2524KiB/1031msec) 00:13:02.310 slat (nsec): min=5382, max=82658, avg=12157.57, stdev=5453.54 00:13:02.310 clat (usec): min=270, max=41409, avg=1177.39, stdev=5780.18 00:13:02.310 lat (usec): min=276, max=41424, avg=1189.55, stdev=5780.60 00:13:02.310 clat percentiles (usec): 00:13:02.310 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:13:02.310 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:13:02.310 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 424], 95.00th=[ 457], 00:13:02.310 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:02.310 | 99.99th=[41157] 00:13:02.310 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:13:02.310 slat (nsec): min=6640, max=87526, avg=12733.04, stdev=5644.51 00:13:02.310 clat (usec): min=183, max=767, avg=255.47, stdev=51.06 00:13:02.310 lat (usec): min=190, max=782, avg=268.20, stdev=52.83 00:13:02.310 clat percentiles (usec): 00:13:02.310 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:13:02.311 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 245], 60.00th=[ 265], 00:13:02.311 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 347], 00:13:02.311 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 437], 99.95th=[ 766], 00:13:02.311 | 99.99th=[ 766] 00:13:02.311 bw ( KiB/s): min= 8192, max= 8192, per=51.55%, avg=8192.00, stdev= 0.00, samples=1 00:13:02.311 iops : min= 2048, max= 2048, avg=2048.00, stdev= 
0.00, samples=1 00:13:02.311 lat (usec) : 250=32.99%, 500=65.92%, 750=0.24%, 1000=0.06% 00:13:02.311 lat (msec) : 50=0.79% 00:13:02.311 cpu : usr=1.84%, sys=1.46%, ctx=1657, majf=0, minf=2 00:13:02.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 issued rwts: total=631,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.311 job1: (groupid=0, jobs=1): err= 0: pid=2458342: Fri Jul 26 14:07:18 2024 00:13:02.311 read: IOPS=657, BW=2629KiB/s (2692kB/s)(2632KiB/1001msec) 00:13:02.311 slat (nsec): min=4939, max=46365, avg=13579.53, stdev=5917.13 00:13:02.311 clat (usec): min=268, max=41507, avg=1066.79, stdev=5219.43 00:13:02.311 lat (usec): min=282, max=41515, avg=1080.37, stdev=5219.56 00:13:02.311 clat percentiles (usec): 00:13:02.311 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:13:02.311 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 379], 00:13:02.311 | 70.00th=[ 441], 80.00th=[ 478], 90.00th=[ 510], 95.00th=[ 537], 00:13:02.311 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:02.311 | 99.99th=[41681] 00:13:02.311 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:02.311 slat (nsec): min=6265, max=38930, avg=13118.60, stdev=6052.01 00:13:02.311 clat (usec): min=170, max=622, avg=263.24, stdev=60.59 00:13:02.311 lat (usec): min=177, max=643, avg=276.36, stdev=63.27 00:13:02.311 clat percentiles (usec): 00:13:02.311 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:13:02.311 | 30.00th=[ 221], 40.00th=[ 237], 50.00th=[ 253], 60.00th=[ 277], 00:13:02.311 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 367], 00:13:02.311 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 545], 99.95th=[ 627], 00:13:02.311 | 99.99th=[ 627] 00:13:02.311 bw ( KiB/s): min= 4096, max= 4096, per=25.78%, avg=4096.00, stdev= 0.00, samples=1 00:13:02.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:02.311 lat (usec) : 250=29.79%, 500=65.22%, 750=4.34% 00:13:02.311 lat (msec) : 50=0.65% 00:13:02.311 cpu : usr=1.10%, sys=2.30%, ctx=1682, majf=0, minf=1 00:13:02.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 issued rwts: total=658,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.311 job2: (groupid=0, jobs=1): err= 0: pid=2458343: Fri Jul 26 14:07:18 2024 00:13:02.311 read: IOPS=71, BW=284KiB/s (291kB/s)(292KiB/1027msec) 00:13:02.311 slat (nsec): min=7688, max=41564, avg=15548.01, stdev=6354.71 00:13:02.311 clat (usec): min=360, max=41344, avg=12114.04, stdev=18492.65 00:13:02.311 lat (usec): min=370, max=41355, avg=12129.58, stdev=18491.95 00:13:02.311 clat percentiles (usec): 00:13:02.311 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 396], 20.00th=[ 408], 00:13:02.311 | 30.00th=[ 429], 40.00th=[ 441], 50.00th=[ 469], 60.00th=[ 490], 00:13:02.311 | 70.00th=[ 570], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:02.311 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:02.311 | 99.99th=[41157] 
00:13:02.311 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:13:02.311 slat (usec): min=7, max=1253, avg=13.78, stdev=55.07 00:13:02.311 clat (usec): min=203, max=1201, avg=259.58, stdev=66.81 00:13:02.311 lat (usec): min=212, max=1550, avg=273.36, stdev=87.89 00:13:02.311 clat percentiles (usec): 00:13:02.311 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:13:02.311 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 258], 00:13:02.311 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 338], 00:13:02.311 | 99.00th=[ 400], 99.50th=[ 783], 99.90th=[ 1205], 99.95th=[ 1205], 00:13:02.311 | 99.99th=[ 1205] 00:13:02.311 bw ( KiB/s): min= 4096, max= 4096, per=25.78%, avg=4096.00, stdev= 0.00, samples=1 00:13:02.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:02.311 lat (usec) : 250=47.18%, 500=47.52%, 750=1.20%, 1000=0.34% 00:13:02.311 lat (msec) : 2=0.17%, 50=3.59% 00:13:02.311 cpu : usr=0.19%, sys=0.68%, ctx=589, majf=0, minf=1 00:13:02.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 issued rwts: total=73,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.311 job3: (groupid=0, jobs=1): err= 0: pid=2458344: Fri Jul 26 14:07:18 2024 00:13:02.311 read: IOPS=1252, BW=5011KiB/s (5131kB/s)(5016KiB/1001msec) 00:13:02.311 slat (nsec): min=5705, max=41331, avg=14389.37, stdev=5848.50 00:13:02.311 clat (usec): min=324, max=1207, avg=449.10, stdev=62.36 00:13:02.311 lat (usec): min=338, max=1225, avg=463.49, stdev=64.27 00:13:02.311 clat percentiles (usec): 00:13:02.311 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 388], 00:13:02.311 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 465], 00:13:02.311 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 537], 00:13:02.311 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 824], 99.95th=[ 1205], 00:13:02.311 | 99.99th=[ 1205] 00:13:02.311 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:02.311 slat (nsec): min=7168, max=85772, avg=13801.77, stdev=7653.49 00:13:02.311 clat (usec): min=189, max=1043, avg=250.22, stdev=49.01 00:13:02.311 lat (usec): min=200, max=1060, avg=264.02, stdev=50.57 00:13:02.311 clat percentiles (usec): 00:13:02.311 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:13:02.311 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 247], 00:13:02.311 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 00:13:02.311 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 988], 99.95th=[ 1045], 00:13:02.311 | 99.99th=[ 1045] 00:13:02.311 bw ( KiB/s): min= 8192, max= 8192, per=51.55%, avg=8192.00, stdev= 0.00, samples=1 00:13:02.311 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:02.311 lat (usec) : 250=33.91%, 500=55.99%, 750=9.93%, 1000=0.11% 00:13:02.311 lat (msec) : 2=0.07% 00:13:02.311 cpu : usr=2.10%, sys=4.50%, ctx=2791, majf=0, minf=1 00:13:02.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.311 issued rwts: total=1254,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:13:02.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.311 00:13:02.311 Run status group 0 (all jobs): 00:13:02.311 READ: bw=9.91MiB/s (10.4MB/s), 284KiB/s-5011KiB/s (291kB/s-5131kB/s), io=10.2MiB (10.7MB), run=1001-1031msec 00:13:02.311 WRITE: bw=15.5MiB/s (16.3MB/s), 1994KiB/s-6138KiB/s (2042kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1031msec 00:13:02.311 00:13:02.311 Disk stats (read/write): 00:13:02.311 nvme0n1: ios=675/1024, merge=0/0, ticks=571/255, in_queue=826, util=85.37% 00:13:02.311 nvme0n2: ios=562/556, merge=0/0, ticks=727/150, in_queue=877, util=89.68% 00:13:02.311 nvme0n3: ios=138/512, merge=0/0, ticks=822/125, in_queue=947, util=92.06% 00:13:02.311 nvme0n4: ios=1081/1320, merge=0/0, ticks=543/314, in_queue=857, util=96.04% 00:13:02.311 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:02.311 [global] 00:13:02.311 thread=1 00:13:02.311 invalidate=1 00:13:02.311 rw=randwrite 00:13:02.311 time_based=1 00:13:02.311 runtime=1 00:13:02.311 ioengine=libaio 00:13:02.311 direct=1 00:13:02.311 bs=4096 00:13:02.311 iodepth=1 00:13:02.311 norandommap=0 00:13:02.311 numjobs=1 00:13:02.311 00:13:02.311 verify_dump=1 00:13:02.311 verify_backlog=512 00:13:02.311 verify_state_save=0 00:13:02.311 do_verify=1 00:13:02.311 verify=crc32c-intel 00:13:02.311 [job0] 00:13:02.311 filename=/dev/nvme0n1 00:13:02.311 [job1] 00:13:02.311 filename=/dev/nvme0n2 00:13:02.311 [job2] 00:13:02.311 filename=/dev/nvme0n3 00:13:02.311 [job3] 00:13:02.311 filename=/dev/nvme0n4 00:13:02.311 Could not set queue depth (nvme0n1) 00:13:02.311 Could not set queue depth (nvme0n2) 00:13:02.311 Could not set queue depth (nvme0n3) 00:13:02.311 Could not set queue depth (nvme0n4) 00:13:02.311 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.311 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.311 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.311 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.311 fio-3.35 00:13:02.311 Starting 4 threads 00:13:03.724 00:13:03.724 job0: (groupid=0, jobs=1): err= 0: pid=2458571: Fri Jul 26 14:07:20 2024 00:13:03.724 read: IOPS=538, BW=2154KiB/s (2206kB/s)(2156KiB/1001msec) 00:13:03.724 slat (nsec): min=5965, max=32466, avg=11883.41, stdev=4756.89 00:13:03.724 clat (usec): min=299, max=41248, avg=1353.33, stdev=6238.07 00:13:03.724 lat (usec): min=305, max=41258, avg=1365.21, stdev=6238.74 00:13:03.724 clat percentiles (usec): 00:13:03.724 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:13:03.724 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:13:03.724 | 70.00th=[ 371], 80.00th=[ 408], 90.00th=[ 502], 95.00th=[ 537], 00:13:03.724 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:03.724 | 99.99th=[41157] 00:13:03.724 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:03.724 slat (nsec): min=8367, max=41879, avg=10460.22, stdev=2338.11 00:13:03.724 clat (usec): min=197, max=965, avg=243.27, stdev=36.07 00:13:03.724 lat (usec): min=207, max=976, avg=253.73, stdev=36.60 00:13:03.724 clat percentiles (usec): 00:13:03.724 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 
215], 20.00th=[ 221], 00:13:03.724 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:13:03.724 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:13:03.724 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 404], 99.95th=[ 963], 00:13:03.724 | 99.99th=[ 963] 00:13:03.724 bw ( KiB/s): min= 8192, max= 8192, per=50.85%, avg=8192.00, stdev= 0.00, samples=1 00:13:03.724 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:03.724 lat (usec) : 250=44.72%, 500=51.70%, 750=2.62%, 1000=0.06% 00:13:03.724 lat (msec) : 2=0.06%, 50=0.83% 00:13:03.724 cpu : usr=0.60%, sys=2.10%, ctx=1568, majf=0, minf=2 00:13:03.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:03.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.724 issued rwts: total=539,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:03.724 job1: (groupid=0, jobs=1): err= 0: pid=2458572: Fri Jul 26 14:07:20 2024 00:13:03.724 read: IOPS=522, BW=2090KiB/s (2140kB/s)(2092KiB/1001msec) 00:13:03.724 slat (nsec): min=7490, max=44336, avg=9415.02, stdev=2424.52 00:13:03.724 clat (usec): min=296, max=41279, avg=1370.44, stdev=6082.09 00:13:03.724 lat (usec): min=304, max=41288, avg=1379.85, stdev=6083.05 00:13:03.724 clat percentiles (usec): 00:13:03.724 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 338], 20.00th=[ 379], 00:13:03.724 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 424], 60.00th=[ 445], 00:13:03.724 | 70.00th=[ 461], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:13:03.724 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:03.724 | 99.99th=[41157] 00:13:03.724 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:03.724 slat (nsec): min=9415, max=46579, avg=12044.99, stdev=2632.07 00:13:03.724 clat (usec): min=189, max=422, avg=256.01, stdev=42.12 00:13:03.724 lat (usec): min=199, max=432, avg=268.05, stdev=42.40 00:13:03.724 clat percentiles (usec): 00:13:03.724 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221], 00:13:03.724 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 260], 00:13:03.724 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 326], 95.00th=[ 343], 00:13:03.724 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 408], 99.95th=[ 424], 00:13:03.724 | 99.99th=[ 424] 00:13:03.724 bw ( KiB/s): min= 6544, max= 6544, per=40.62%, avg=6544.00, stdev= 0.00, samples=1 00:13:03.724 iops : min= 1636, max= 1636, avg=1636.00, stdev= 0.00, samples=1 00:13:03.724 lat (usec) : 250=34.84%, 500=59.53%, 750=4.65% 00:13:03.724 lat (msec) : 2=0.06%, 4=0.06%, 10=0.06%, 50=0.78% 00:13:03.724 cpu : usr=1.50%, sys=1.70%, ctx=1551, majf=0, minf=1 00:13:03.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:03.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.724 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:03.724 job2: (groupid=0, jobs=1): err= 0: pid=2458573: Fri Jul 26 14:07:20 2024 00:13:03.724 read: IOPS=562, BW=2250KiB/s (2304kB/s)(2252KiB/1001msec) 00:13:03.724 slat (nsec): min=8486, max=25905, avg=11198.42, stdev=2276.58 00:13:03.724 clat (usec): min=368, max=43945, avg=1285.32, 
stdev=5905.33 00:13:03.724 lat (usec): min=378, max=43971, avg=1296.52, stdev=5906.54 00:13:03.724 clat percentiles (usec): 00:13:03.724 | 1.00th=[ 375], 5.00th=[ 379], 10.00th=[ 383], 20.00th=[ 392], 00:13:03.724 | 30.00th=[ 396], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[ 412], 00:13:03.724 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 453], 95.00th=[ 478], 00:13:03.724 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:13:03.724 | 99.99th=[43779] 00:13:03.724 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:03.724 slat (nsec): min=8366, max=33947, avg=13695.62, stdev=2416.91 00:13:03.724 clat (usec): min=202, max=1003, avg=245.53, stdev=27.24 00:13:03.725 lat (usec): min=213, max=1013, avg=259.22, stdev=27.46 00:13:03.725 clat percentiles (usec): 00:13:03.725 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 235], 00:13:03.725 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:13:03.725 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:13:03.725 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 338], 99.95th=[ 1004], 00:13:03.725 | 99.99th=[ 1004] 00:13:03.725 bw ( KiB/s): min= 8192, max= 8192, per=50.85%, avg=8192.00, stdev= 0.00, samples=1 00:13:03.725 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:03.725 lat (usec) : 250=44.68%, 500=53.88%, 750=0.44%, 1000=0.13% 00:13:03.725 lat (msec) : 2=0.13%, 50=0.76% 00:13:03.725 cpu : usr=1.30%, sys=2.70%, ctx=1588, majf=0, minf=1 00:13:03.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:03.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.725 issued rwts: total=563,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:03.725 job3: (groupid=0, jobs=1): err= 0: pid=2458574: Fri Jul 26 14:07:20 2024 00:13:03.725 read: IOPS=513, BW=2053KiB/s (2102kB/s)(2088KiB/1017msec) 00:13:03.725 slat (usec): min=6, max=313, avg=10.17, stdev=13.67 00:13:03.725 clat (usec): min=309, max=41019, avg=1416.52, stdev=6077.80 00:13:03.725 lat (usec): min=316, max=41035, avg=1426.69, stdev=6082.35 00:13:03.725 clat percentiles (usec): 00:13:03.725 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:13:03.725 | 30.00th=[ 371], 40.00th=[ 416], 50.00th=[ 449], 60.00th=[ 474], 00:13:03.725 | 70.00th=[ 502], 80.00th=[ 562], 90.00th=[ 693], 95.00th=[ 717], 00:13:03.725 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:03.725 | 99.99th=[41157] 00:13:03.725 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:13:03.725 slat (nsec): min=8213, max=31781, avg=10523.74, stdev=2727.87 00:13:03.725 clat (usec): min=193, max=781, avg=250.76, stdev=42.14 00:13:03.725 lat (usec): min=202, max=793, avg=261.28, stdev=43.15 00:13:03.725 clat percentiles (usec): 00:13:03.725 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 221], 00:13:03.725 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:13:03.725 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 330], 00:13:03.725 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 783], 00:13:03.725 | 99.99th=[ 783] 00:13:03.725 bw ( KiB/s): min= 3440, max= 4752, per=25.43%, avg=4096.00, stdev=927.72, samples=2 00:13:03.725 iops : min= 860, max= 1188, avg=1024.00, stdev=231.93, samples=2 00:13:03.725 lat (usec) : 
250=39.07%, 500=50.26%, 750=9.38%, 1000=0.32% 00:13:03.725 lat (msec) : 4=0.13%, 10=0.06%, 50=0.78% 00:13:03.725 cpu : usr=1.08%, sys=1.77%, ctx=1548, majf=0, minf=1 00:13:03.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:03.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.725 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:03.725 00:13:03.725 Run status group 0 (all jobs): 00:13:03.725 READ: bw=8444KiB/s (8647kB/s), 2053KiB/s-2250KiB/s (2102kB/s-2304kB/s), io=8588KiB (8794kB), run=1001-1017msec 00:13:03.725 WRITE: bw=15.7MiB/s (16.5MB/s), 4028KiB/s-4092KiB/s (4124kB/s-4190kB/s), io=16.0MiB (16.8MB), run=1001-1017msec 00:13:03.725 00:13:03.725 Disk stats (read/write): 00:13:03.725 nvme0n1: ios=576/1024, merge=0/0, ticks=1356/241, in_queue=1597, util=98.10% 00:13:03.725 nvme0n2: ios=569/1024, merge=0/0, ticks=1033/249, in_queue=1282, util=95.47% 00:13:03.725 nvme0n3: ios=602/1024, merge=0/0, ticks=855/240, in_queue=1095, util=99.67% 00:13:03.725 nvme0n4: ios=516/1024, merge=0/0, ticks=487/243, in_queue=730, util=89.01% 00:13:03.725 14:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:03.725 [global] 00:13:03.725 thread=1 00:13:03.725 invalidate=1 00:13:03.725 rw=write 00:13:03.725 time_based=1 00:13:03.725 runtime=1 00:13:03.725 ioengine=libaio 00:13:03.725 direct=1 00:13:03.725 bs=4096 00:13:03.725 iodepth=128 00:13:03.725 norandommap=0 00:13:03.725 numjobs=1 00:13:03.725 00:13:03.725 verify_dump=1 00:13:03.725 verify_backlog=512 00:13:03.725 verify_state_save=0 00:13:03.725 do_verify=1 00:13:03.725 verify=crc32c-intel 00:13:03.725 [job0] 00:13:03.725 filename=/dev/nvme0n1 00:13:03.725 [job1] 00:13:03.725 filename=/dev/nvme0n2 00:13:03.725 [job2] 00:13:03.725 filename=/dev/nvme0n3 00:13:03.725 [job3] 00:13:03.725 filename=/dev/nvme0n4 00:13:03.725 Could not set queue depth (nvme0n1) 00:13:03.725 Could not set queue depth (nvme0n2) 00:13:03.725 Could not set queue depth (nvme0n3) 00:13:03.725 Could not set queue depth (nvme0n4) 00:13:03.725 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.725 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.725 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.725 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.725 fio-3.35 00:13:03.725 Starting 4 threads 00:13:05.100 00:13:05.100 job0: (groupid=0, jobs=1): err= 0: pid=2458806: Fri Jul 26 14:07:21 2024 00:13:05.100 read: IOPS=1811, BW=7247KiB/s (7421kB/s)(7580KiB/1046msec) 00:13:05.100 slat (usec): min=2, max=26723, avg=295.70, stdev=2034.62 00:13:05.100 clat (msec): min=7, max=131, avg=35.30, stdev=26.08 00:13:05.100 lat (msec): min=7, max=131, avg=35.60, stdev=26.24 00:13:05.100 clat percentiles (msec): 00:13:05.100 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 15], 20.00th=[ 16], 00:13:05.100 | 30.00th=[ 20], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 30], 00:13:05.100 | 70.00th=[ 42], 80.00th=[ 57], 90.00th=[ 72], 95.00th=[ 88], 00:13:05.100 | 99.00th=[ 132], 99.50th=[ 
132], 99.90th=[ 132], 99.95th=[ 132], 00:13:05.100 | 99.99th=[ 132] 00:13:05.100 write: IOPS=1957, BW=7832KiB/s (8020kB/s)(8192KiB/1046msec); 0 zone resets 00:13:05.100 slat (usec): min=3, max=18731, avg=204.36, stdev=1359.21 00:13:05.100 clat (usec): min=3908, max=89022, avg=31558.95, stdev=17078.90 00:13:05.100 lat (msec): min=3, max=101, avg=31.76, stdev=17.14 00:13:05.100 clat percentiles (usec): 00:13:05.100 | 1.00th=[ 6063], 5.00th=[10028], 10.00th=[11731], 20.00th=[14877], 00:13:05.100 | 30.00th=[18220], 40.00th=[23725], 50.00th=[31589], 60.00th=[36963], 00:13:05.100 | 70.00th=[39060], 80.00th=[49021], 90.00th=[54264], 95.00th=[60556], 00:13:05.100 | 99.00th=[88605], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:13:05.100 | 99.99th=[88605] 00:13:05.100 bw ( KiB/s): min= 6832, max= 9552, per=16.02%, avg=8192.00, stdev=1923.33, samples=2 00:13:05.100 iops : min= 1708, max= 2388, avg=2048.00, stdev=480.83, samples=2 00:13:05.100 lat (msec) : 4=0.08%, 10=5.02%, 20=27.75%, 50=46.64%, 100=18.94% 00:13:05.100 lat (msec) : 250=1.57% 00:13:05.100 cpu : usr=1.24%, sys=2.11%, ctx=154, majf=0, minf=15 00:13:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.100 issued rwts: total=1895,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.100 job1: (groupid=0, jobs=1): err= 0: pid=2458807: Fri Jul 26 14:07:21 2024 00:13:05.100 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:13:05.100 slat (usec): min=3, max=11857, avg=99.69, stdev=753.84 00:13:05.100 clat (usec): min=1115, max=63394, avg=13700.84, stdev=5336.83 00:13:05.100 lat (usec): min=1136, max=63401, avg=13800.53, stdev=5379.95 00:13:05.100 clat percentiles (usec): 00:13:05.100 | 1.00th=[ 1303], 5.00th=[ 4752], 10.00th=[10159], 20.00th=[11207], 00:13:05.100 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13304], 60.00th=[13960], 00:13:05.100 | 70.00th=[15139], 80.00th=[15795], 90.00th=[19268], 95.00th=[21103], 00:13:05.100 | 99.00th=[25822], 99.50th=[60556], 99.90th=[63177], 99.95th=[63177], 00:13:05.100 | 99.99th=[63177] 00:13:05.100 write: IOPS=4432, BW=17.3MiB/s (18.2MB/s)(17.5MiB/1012msec); 0 zone resets 00:13:05.100 slat (usec): min=5, max=13019, avg=116.11, stdev=786.63 00:13:05.100 clat (usec): min=2922, max=58299, avg=16074.61, stdev=8828.09 00:13:05.100 lat (usec): min=2933, max=58309, avg=16190.72, stdev=8873.50 00:13:05.100 clat percentiles (usec): 00:13:05.100 | 1.00th=[ 5604], 5.00th=[ 7570], 10.00th=[ 8848], 20.00th=[ 9765], 00:13:05.100 | 30.00th=[11207], 40.00th=[12649], 50.00th=[13435], 60.00th=[14222], 00:13:05.100 | 70.00th=[16581], 80.00th=[21890], 90.00th=[26870], 95.00th=[33817], 00:13:05.100 | 99.00th=[51643], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:13:05.100 | 99.99th=[58459] 00:13:05.100 bw ( KiB/s): min=16384, max=18488, per=34.10%, avg=17436.00, stdev=1487.75, samples=2 00:13:05.100 iops : min= 4096, max= 4622, avg=4359.00, stdev=371.94, samples=2 00:13:05.100 lat (msec) : 2=0.82%, 4=1.27%, 10=12.76%, 20=70.11%, 50=14.23% 00:13:05.100 lat (msec) : 100=0.82% 00:13:05.100 cpu : usr=4.85%, sys=5.84%, ctx=323, majf=0, minf=11 00:13:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.100 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.100 issued rwts: total=4096,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.100 job2: (groupid=0, jobs=1): err= 0: pid=2458808: Fri Jul 26 14:07:21 2024 00:13:05.100 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:13:05.100 slat (usec): min=3, max=13083, avg=130.93, stdev=888.67 00:13:05.100 clat (usec): min=4517, max=50122, avg=15407.26, stdev=5818.34 00:13:05.100 lat (usec): min=4523, max=50131, avg=15538.19, stdev=5881.18 00:13:05.100 clat percentiles (usec): 00:13:05.100 | 1.00th=[ 6980], 5.00th=[10028], 10.00th=[10945], 20.00th=[11863], 00:13:05.100 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13698], 60.00th=[14746], 00:13:05.100 | 70.00th=[16581], 80.00th=[18744], 90.00th=[21103], 95.00th=[27395], 00:13:05.100 | 99.00th=[41157], 99.50th=[43779], 99.90th=[50070], 99.95th=[50070], 00:13:05.100 | 99.99th=[50070] 00:13:05.100 write: IOPS=4362, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec); 0 zone resets 00:13:05.100 slat (usec): min=4, max=10801, avg=95.03, stdev=492.44 00:13:05.100 clat (usec): min=753, max=50127, avg=14684.96, stdev=7720.91 00:13:05.100 lat (usec): min=773, max=50141, avg=14780.00, stdev=7750.42 00:13:05.100 clat percentiles (usec): 00:13:05.100 | 1.00th=[ 1582], 5.00th=[ 5080], 10.00th=[ 6849], 20.00th=[10421], 00:13:05.100 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[13042], 00:13:05.100 | 70.00th=[15533], 80.00th=[21627], 90.00th=[25560], 95.00th=[30802], 00:13:05.100 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:13:05.100 | 99.99th=[50070] 00:13:05.100 bw ( KiB/s): min=16384, max=17640, per=33.27%, avg=17012.00, stdev=888.13, samples=2 00:13:05.100 iops : min= 4096, max= 4410, avg=4253.00, stdev=222.03, samples=2 00:13:05.100 lat (usec) : 1000=0.05% 00:13:05.100 lat (msec) : 2=0.50%, 4=1.01%, 10=9.71%, 20=71.86%, 50=16.79% 00:13:05.100 lat (msec) : 100=0.08% 00:13:05.100 cpu : usr=4.39%, sys=6.78%, ctx=512, majf=0, minf=9 00:13:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.100 issued rwts: total=4096,4380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.100 job3: (groupid=0, jobs=1): err= 0: pid=2458809: Fri Jul 26 14:07:21 2024 00:13:05.100 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:13:05.100 slat (usec): min=2, max=38222, avg=238.42, stdev=2013.93 00:13:05.100 clat (msec): min=3, max=115, avg=32.77, stdev=25.35 00:13:05.100 lat (msec): min=4, max=120, avg=33.01, stdev=25.58 00:13:05.100 clat percentiles (msec): 00:13:05.100 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:13:05.100 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 27], 00:13:05.100 | 70.00th=[ 37], 80.00th=[ 57], 90.00th=[ 79], 95.00th=[ 85], 00:13:05.100 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 109], 99.95th=[ 109], 00:13:05.100 | 99.99th=[ 115] 00:13:05.100 write: IOPS=2440, BW=9764KiB/s (9998kB/s)(9832KiB/1007msec); 0 zone resets 00:13:05.100 slat (usec): min=4, max=18343, avg=195.29, stdev=1329.91 00:13:05.100 clat (usec): min=1182, max=103942, avg=23486.21, stdev=16768.01 00:13:05.100 lat (usec): min=1189, max=103947, avg=23681.50, stdev=16914.90 00:13:05.100 clat 
percentiles (msec): 00:13:05.100 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 12], 00:13:05.100 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 19], 60.00th=[ 22], 00:13:05.100 | 70.00th=[ 26], 80.00th=[ 38], 90.00th=[ 41], 95.00th=[ 58], 00:13:05.100 | 99.00th=[ 89], 99.50th=[ 101], 99.90th=[ 105], 99.95th=[ 105], 00:13:05.100 | 99.99th=[ 105] 00:13:05.100 bw ( KiB/s): min= 6064, max=12584, per=18.23%, avg=9324.00, stdev=4610.34, samples=2 00:13:05.100 iops : min= 1516, max= 3146, avg=2331.00, stdev=1152.58, samples=2 00:13:05.100 lat (msec) : 2=0.29%, 4=0.36%, 10=7.88%, 20=42.41%, 50=33.67% 00:13:05.100 lat (msec) : 100=14.69%, 250=0.71% 00:13:05.100 cpu : usr=1.09%, sys=2.98%, ctx=183, majf=0, minf=15 00:13:05.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:05.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.100 issued rwts: total=2048,2458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.100 00:13:05.100 Run status group 0 (all jobs): 00:13:05.100 READ: bw=45.3MiB/s (47.5MB/s), 7247KiB/s-15.9MiB/s (7421kB/s-16.7MB/s), io=47.4MiB (49.7MB), run=1004-1046msec 00:13:05.100 WRITE: bw=49.9MiB/s (52.4MB/s), 7832KiB/s-17.3MiB/s (8020kB/s-18.2MB/s), io=52.2MiB (54.8MB), run=1004-1046msec 00:13:05.100 00:13:05.100 Disk stats (read/write): 00:13:05.100 nvme0n1: ios=1638/2048, merge=0/0, ticks=18775/17732, in_queue=36507, util=84.87% 00:13:05.100 nvme0n2: ios=3515/3584, merge=0/0, ticks=45977/57215, in_queue=103192, util=88.28% 00:13:05.100 nvme0n3: ios=3129/3584, merge=0/0, ticks=47978/53965, in_queue=101943, util=95.06% 00:13:05.101 nvme0n4: ios=2021/2048, merge=0/0, ticks=28546/20589, in_queue=49135, util=96.07% 00:13:05.101 14:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:05.101 [global] 00:13:05.101 thread=1 00:13:05.101 invalidate=1 00:13:05.101 rw=randwrite 00:13:05.101 time_based=1 00:13:05.101 runtime=1 00:13:05.101 ioengine=libaio 00:13:05.101 direct=1 00:13:05.101 bs=4096 00:13:05.101 iodepth=128 00:13:05.101 norandommap=0 00:13:05.101 numjobs=1 00:13:05.101 00:13:05.101 verify_dump=1 00:13:05.101 verify_backlog=512 00:13:05.101 verify_state_save=0 00:13:05.101 do_verify=1 00:13:05.101 verify=crc32c-intel 00:13:05.101 [job0] 00:13:05.101 filename=/dev/nvme0n1 00:13:05.101 [job1] 00:13:05.101 filename=/dev/nvme0n2 00:13:05.101 [job2] 00:13:05.101 filename=/dev/nvme0n3 00:13:05.101 [job3] 00:13:05.101 filename=/dev/nvme0n4 00:13:05.101 Could not set queue depth (nvme0n1) 00:13:05.101 Could not set queue depth (nvme0n2) 00:13:05.101 Could not set queue depth (nvme0n3) 00:13:05.101 Could not set queue depth (nvme0n4) 00:13:05.359 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.359 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.359 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.359 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.359 fio-3.35 00:13:05.359 Starting 4 threads 00:13:06.734 00:13:06.734 job0: (groupid=0, jobs=1): err= 0: pid=2459122: 
Fri Jul 26 14:07:23 2024 00:13:06.734 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:13:06.734 slat (usec): min=2, max=27439, avg=173.99, stdev=1359.01 00:13:06.734 clat (usec): min=4462, max=48230, avg=21752.48, stdev=8143.64 00:13:06.734 lat (usec): min=4470, max=48250, avg=21926.47, stdev=8249.71 00:13:06.734 clat percentiles (usec): 00:13:06.734 | 1.00th=[ 4555], 5.00th=[12256], 10.00th=[13566], 20.00th=[15401], 00:13:06.734 | 30.00th=[16188], 40.00th=[17171], 50.00th=[19268], 60.00th=[22414], 00:13:06.734 | 70.00th=[25560], 80.00th=[27919], 90.00th=[31851], 95.00th=[39584], 00:13:06.734 | 99.00th=[42206], 99.50th=[43254], 99.90th=[45876], 99.95th=[47449], 00:13:06.734 | 99.99th=[47973] 00:13:06.734 write: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1012msec); 0 zone resets 00:13:06.734 slat (usec): min=3, max=22993, avg=186.75, stdev=1149.14 00:13:06.734 clat (msec): min=2, max=125, avg=25.28, stdev=23.61 00:13:06.734 lat (msec): min=2, max=125, avg=25.47, stdev=23.76 00:13:06.734 clat percentiles (msec): 00:13:06.734 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 12], 00:13:06.734 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 20], 00:13:06.734 | 70.00th=[ 26], 80.00th=[ 35], 90.00th=[ 59], 95.00th=[ 80], 00:13:06.734 | 99.00th=[ 115], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 126], 00:13:06.734 | 99.99th=[ 126] 00:13:06.734 bw ( KiB/s): min= 9508, max=12214, per=20.81%, avg=10861.00, stdev=1913.43, samples=2 00:13:06.734 iops : min= 2377, max= 3053, avg=2715.00, stdev=478.00, samples=2 00:13:06.734 lat (msec) : 4=1.85%, 10=6.37%, 20=48.89%, 50=36.65%, 100=4.78% 00:13:06.734 lat (msec) : 250=1.46% 00:13:06.734 cpu : usr=2.08%, sys=3.36%, ctx=201, majf=0, minf=1 00:13:06.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:06.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:06.734 issued rwts: total=2560,2854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:06.734 job1: (groupid=0, jobs=1): err= 0: pid=2459150: Fri Jul 26 14:07:23 2024 00:13:06.734 read: IOPS=3628, BW=14.2MiB/s (14.9MB/s)(14.8MiB/1043msec) 00:13:06.734 slat (usec): min=2, max=27207, avg=127.62, stdev=859.02 00:13:06.734 clat (usec): min=6444, max=94158, avg=18535.47, stdev=14521.43 00:13:06.734 lat (usec): min=6450, max=94163, avg=18663.09, stdev=14569.00 00:13:06.734 clat percentiles (usec): 00:13:06.734 | 1.00th=[ 7242], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10683], 00:13:06.734 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12518], 60.00th=[13173], 00:13:06.734 | 70.00th=[18220], 80.00th=[23200], 90.00th=[33817], 95.00th=[53740], 00:13:06.734 | 99.00th=[86508], 99.50th=[89654], 99.90th=[93848], 99.95th=[93848], 00:13:06.734 | 99.99th=[93848] 00:13:06.734 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1043msec); 0 zone resets 00:13:06.734 slat (usec): min=4, max=26264, avg=115.20, stdev=886.45 00:13:06.734 clat (usec): min=5909, max=41304, avg=14210.57, stdev=6538.37 00:13:06.734 lat (usec): min=5915, max=41311, avg=14325.76, stdev=6580.73 00:13:06.734 clat percentiles (usec): 00:13:06.734 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10290], 00:13:06.734 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:13:06.734 | 70.00th=[13304], 80.00th=[19006], 90.00th=[23462], 95.00th=[31851], 00:13:06.734 | 99.00th=[36963], 99.50th=[36963], 
99.90th=[36963], 99.95th=[36963], 00:13:06.734 | 99.99th=[41157] 00:13:06.734 bw ( KiB/s): min=13168, max=19481, per=31.28%, avg=16324.50, stdev=4463.97, samples=2 00:13:06.734 iops : min= 3292, max= 4870, avg=4081.00, stdev=1115.81, samples=2 00:13:06.734 lat (msec) : 10=14.69%, 20=61.67%, 50=20.81%, 100=2.83% 00:13:06.734 cpu : usr=2.88%, sys=4.03%, ctx=323, majf=0, minf=1 00:13:06.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:06.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:06.734 issued rwts: total=3785,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:06.734 job2: (groupid=0, jobs=1): err= 0: pid=2459160: Fri Jul 26 14:07:23 2024 00:13:06.734 read: IOPS=2918, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1007msec) 00:13:06.734 slat (usec): min=3, max=45477, avg=184.77, stdev=1626.57 00:13:06.734 clat (msec): min=3, max=118, avg=22.75, stdev=16.08 00:13:06.734 lat (msec): min=8, max=118, avg=22.94, stdev=16.23 00:13:06.734 clat percentiles (msec): 00:13:06.734 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:13:06.734 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 19], 00:13:06.734 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 38], 95.00th=[ 73], 00:13:06.734 | 99.00th=[ 82], 99.50th=[ 83], 99.90th=[ 85], 99.95th=[ 110], 00:13:06.734 | 99.99th=[ 118] 00:13:06.735 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:13:06.735 slat (usec): min=4, max=19596, avg=139.34, stdev=787.75 00:13:06.735 clat (usec): min=5701, max=49843, avg=18960.28, stdev=8728.55 00:13:06.735 lat (usec): min=5707, max=49850, avg=19099.62, stdev=8773.90 00:13:06.735 clat percentiles (usec): 00:13:06.735 | 1.00th=[ 9372], 5.00th=[10421], 10.00th=[11076], 20.00th=[12911], 00:13:06.735 | 30.00th=[13829], 40.00th=[14615], 50.00th=[15926], 60.00th=[16581], 00:13:06.735 | 70.00th=[18482], 80.00th=[27919], 90.00th=[33817], 95.00th=[38011], 00:13:06.735 | 99.00th=[46400], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:13:06.735 | 99.99th=[50070] 00:13:06.735 bw ( KiB/s): min=12214, max=12263, per=23.45%, avg=12238.50, stdev=34.65, samples=2 00:13:06.735 iops : min= 3053, max= 3065, avg=3059.00, stdev= 8.49, samples=2 00:13:06.735 lat (msec) : 4=0.02%, 10=3.21%, 20=64.96%, 50=28.65%, 100=3.13% 00:13:06.735 lat (msec) : 250=0.03% 00:13:06.735 cpu : usr=2.09%, sys=3.28%, ctx=300, majf=0, minf=1 00:13:06.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:06.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:06.735 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:06.735 job3: (groupid=0, jobs=1): err= 0: pid=2459162: Fri Jul 26 14:07:23 2024 00:13:06.735 read: IOPS=3334, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1004msec) 00:13:06.735 slat (usec): min=3, max=19890, avg=125.15, stdev=1088.24 00:13:06.735 clat (usec): min=1715, max=43974, avg=17097.70, stdev=5925.20 00:13:06.735 lat (usec): min=5099, max=43992, avg=17222.85, stdev=6026.22 00:13:06.735 clat percentiles (usec): 00:13:06.735 | 1.00th=[ 5604], 5.00th=[ 7242], 10.00th=[ 8979], 20.00th=[11600], 00:13:06.735 | 30.00th=[13960], 40.00th=[15270], 50.00th=[17957], 60.00th=[18482], 
00:13:06.735 | 70.00th=[19792], 80.00th=[21365], 90.00th=[25297], 95.00th=[26084], 00:13:06.735 | 99.00th=[30016], 99.50th=[33817], 99.90th=[35914], 99.95th=[41157], 00:13:06.735 | 99.99th=[43779] 00:13:06.735 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:13:06.735 slat (usec): min=4, max=15211, avg=127.38, stdev=875.52 00:13:06.735 clat (usec): min=906, max=133753, avg=18933.88, stdev=18699.03 00:13:06.735 lat (usec): min=913, max=135282, avg=19061.26, stdev=18768.16 00:13:06.735 clat percentiles (msec): 00:13:06.735 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 11], 00:13:06.735 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:13:06.735 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 37], 95.00th=[ 50], 00:13:06.735 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 134], 00:13:06.735 | 99.99th=[ 134] 00:13:06.735 bw ( KiB/s): min=16310, max=16310, per=31.26%, avg=16310.00, stdev= 0.00, samples=1 00:13:06.735 iops : min= 4077, max= 4077, avg=4077.00, stdev= 0.00, samples=1 00:13:06.735 lat (usec) : 1000=0.06% 00:13:06.735 lat (msec) : 2=0.01%, 4=0.39%, 10=14.95%, 20=59.75%, 50=22.36% 00:13:06.735 lat (msec) : 100=1.46%, 250=1.02% 00:13:06.735 cpu : usr=2.09%, sys=5.58%, ctx=273, majf=0, minf=1 00:13:06.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:06.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:06.735 issued rwts: total=3348,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:06.735 00:13:06.735 Run status group 0 (all jobs): 00:13:06.735 READ: bw=47.3MiB/s (49.6MB/s), 9.88MiB/s-14.2MiB/s (10.4MB/s-14.9MB/s), io=49.3MiB (51.7MB), run=1004-1043msec 00:13:06.735 WRITE: bw=51.0MiB/s (53.4MB/s), 11.0MiB/s-15.3MiB/s (11.6MB/s-16.1MB/s), io=53.1MiB (55.7MB), run=1004-1043msec 00:13:06.735 00:13:06.735 Disk stats (read/write): 00:13:06.735 nvme0n1: ios=2087/2063, merge=0/0, ticks=29022/30638, in_queue=59660, util=84.65% 00:13:06.735 nvme0n2: ios=3106/3553, merge=0/0, ticks=20711/16162, in_queue=36873, util=96.19% 00:13:06.735 nvme0n3: ios=2158/2560, merge=0/0, ticks=24627/17649, in_queue=42276, util=99.13% 00:13:06.735 nvme0n4: ios=2589/2824, merge=0/0, ticks=47267/37859, in_queue=85126, util=96.22% 00:13:06.735 14:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:06.735 14:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2459303 00:13:06.735 14:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:06.735 14:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:06.735 [global] 00:13:06.735 thread=1 00:13:06.735 invalidate=1 00:13:06.735 rw=read 00:13:06.735 time_based=1 00:13:06.735 runtime=10 00:13:06.735 ioengine=libaio 00:13:06.735 direct=1 00:13:06.735 bs=4096 00:13:06.735 iodepth=1 00:13:06.735 norandommap=1 00:13:06.735 numjobs=1 00:13:06.735 00:13:06.735 [job0] 00:13:06.735 filename=/dev/nvme0n1 00:13:06.735 [job1] 00:13:06.735 filename=/dev/nvme0n2 00:13:06.735 [job2] 00:13:06.735 filename=/dev/nvme0n3 00:13:06.735 [job3] 00:13:06.735 filename=/dev/nvme0n4 00:13:06.735 Could not set queue depth (nvme0n1) 00:13:06.735 Could not set queue depth (nvme0n2) 00:13:06.735 Could not set 
queue depth (nvme0n3) 00:13:06.735 Could not set queue depth (nvme0n4) 00:13:06.993 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.993 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.993 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.993 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.993 fio-3.35 00:13:06.993 Starting 4 threads 00:13:10.273 14:07:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:10.273 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=12898304, buflen=4096 00:13:10.273 fio: pid=2459394, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:10.273 14:07:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:10.273 14:07:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:10.273 14:07:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:10.273 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=31117312, buflen=4096 00:13:10.273 fio: pid=2459393, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:10.838 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17907712, buflen=4096 00:13:10.838 fio: pid=2459391, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:10.838 14:07:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:10.838 14:07:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:11.096 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=40058880, buflen=4096 00:13:11.096 fio: pid=2459392, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:11.096 14:07:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:11.096 14:07:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:11.096 00:13:11.096 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2459391: Fri Jul 26 14:07:27 2024 00:13:11.096 read: IOPS=1213, BW=4854KiB/s (4970kB/s)(17.1MiB/3603msec) 00:13:11.096 slat (usec): min=6, max=12833, avg=18.71, stdev=325.40 00:13:11.096 clat (usec): min=277, max=41989, avg=797.49, stdev=4157.14 00:13:11.096 lat (usec): min=285, max=54001, avg=816.21, stdev=4198.25 00:13:11.096 clat percentiles (usec): 00:13:11.096 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:13:11.096 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:13:11.096 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 449], 95.00th=[ 510], 00:13:11.096 | 99.00th=[40633], 99.50th=[41157], 
99.90th=[41681], 99.95th=[41681], 00:13:11.096 | 99.99th=[42206] 00:13:11.096 bw ( KiB/s): min= 96, max=11272, per=19.48%, avg=4886.00, stdev=3969.59, samples=7 00:13:11.096 iops : min= 24, max= 2818, avg=1221.43, stdev=992.41, samples=7 00:13:11.097 lat (usec) : 500=93.99%, 750=4.80% 00:13:11.097 lat (msec) : 2=0.11%, 10=0.02%, 50=1.05% 00:13:11.097 cpu : usr=0.94%, sys=1.75%, ctx=4380, majf=0, minf=1 00:13:11.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 issued rwts: total=4373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.097 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2459392: Fri Jul 26 14:07:27 2024 00:13:11.097 read: IOPS=2463, BW=9854KiB/s (10.1MB/s)(38.2MiB/3970msec) 00:13:11.097 slat (usec): min=5, max=14803, avg=17.72, stdev=294.66 00:13:11.097 clat (usec): min=270, max=40998, avg=383.09, stdev=642.97 00:13:11.097 lat (usec): min=275, max=41007, avg=400.81, stdev=708.73 00:13:11.097 clat percentiles (usec): 00:13:11.097 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:13:11.097 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 363], 00:13:11.097 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 465], 95.00th=[ 510], 00:13:11.097 | 99.00th=[ 594], 99.50th=[ 644], 99.90th=[ 930], 99.95th=[ 5669], 00:13:11.097 | 99.99th=[41157] 00:13:11.097 bw ( KiB/s): min= 8968, max=11432, per=39.93%, avg=10016.71, stdev=857.98, samples=7 00:13:11.097 iops : min= 2242, max= 2858, avg=2504.14, stdev=214.54, samples=7 00:13:11.097 lat (usec) : 500=94.04%, 750=5.76%, 1000=0.10% 00:13:11.097 lat (msec) : 2=0.04%, 10=0.01%, 20=0.01%, 50=0.03% 00:13:11.097 cpu : usr=1.46%, sys=3.73%, ctx=9789, majf=0, minf=1 00:13:11.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 issued rwts: total=9781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.097 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2459393: Fri Jul 26 14:07:27 2024 00:13:11.097 read: IOPS=2307, BW=9228KiB/s (9450kB/s)(29.7MiB/3293msec) 00:13:11.097 slat (usec): min=7, max=15515, avg=14.85, stdev=229.64 00:13:11.097 clat (usec): min=290, max=41011, avg=413.00, stdev=1140.97 00:13:11.097 lat (usec): min=300, max=41029, avg=427.85, stdev=1164.24 00:13:11.097 clat percentiles (usec): 00:13:11.097 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:13:11.097 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 371], 00:13:11.097 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 461], 95.00th=[ 502], 00:13:11.097 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[ 2114], 99.95th=[41157], 00:13:11.097 | 99.99th=[41157] 00:13:11.097 bw ( KiB/s): min= 8360, max=11216, per=39.24%, avg=9844.00, stdev=1187.05, samples=6 00:13:11.097 iops : min= 2090, max= 2804, avg=2461.00, stdev=296.76, samples=6 00:13:11.097 lat (usec) : 500=94.79%, 750=4.94%, 1000=0.08% 00:13:11.097 lat (msec) : 2=0.08%, 4=0.03%, 50=0.08% 00:13:11.097 cpu : usr=1.49%, sys=3.25%, ctx=7602, majf=0, 
minf=1 00:13:11.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 issued rwts: total=7598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.097 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2459394: Fri Jul 26 14:07:27 2024 00:13:11.097 read: IOPS=1085, BW=4342KiB/s (4446kB/s)(12.3MiB/2901msec) 00:13:11.097 slat (nsec): min=5509, max=54268, avg=15896.11, stdev=4386.19 00:13:11.097 clat (usec): min=292, max=41965, avg=898.45, stdev=4290.92 00:13:11.097 lat (usec): min=301, max=41980, avg=914.34, stdev=4291.19 00:13:11.097 clat percentiles (usec): 00:13:11.097 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 371], 00:13:11.097 | 30.00th=[ 396], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 445], 00:13:11.097 | 70.00th=[ 465], 80.00th=[ 498], 90.00th=[ 537], 95.00th=[ 594], 00:13:11.097 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:13:11.097 | 99.99th=[42206] 00:13:11.097 bw ( KiB/s): min= 520, max= 8536, per=14.49%, avg=3635.20, stdev=3114.03, samples=5 00:13:11.097 iops : min= 130, max= 2134, avg=908.80, stdev=778.51, samples=5 00:13:11.097 lat (usec) : 500=80.54%, 750=17.94%, 1000=0.10% 00:13:11.097 lat (msec) : 2=0.25%, 50=1.14% 00:13:11.097 cpu : usr=0.69%, sys=2.03%, ctx=3153, majf=0, minf=1 00:13:11.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.097 issued rwts: total=3150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.097 00:13:11.097 Run status group 0 (all jobs): 00:13:11.097 READ: bw=24.5MiB/s (25.7MB/s), 4342KiB/s-9854KiB/s (4446kB/s-10.1MB/s), io=97.3MiB (102MB), run=2901-3970msec 00:13:11.097 00:13:11.097 Disk stats (read/write): 00:13:11.097 nvme0n1: ios=4373/0, merge=0/0, ticks=3463/0, in_queue=3463, util=94.93% 00:13:11.097 nvme0n2: ios=9463/0, merge=0/0, ticks=3583/0, in_queue=3583, util=95.00% 00:13:11.097 nvme0n3: ios=7595/0, merge=0/0, ticks=3109/0, in_queue=3109, util=99.19% 00:13:11.097 nvme0n4: ios=3078/0, merge=0/0, ticks=3839/0, in_queue=3839, util=100.00% 00:13:11.355 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:11.355 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:11.920 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:11.920 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:12.485 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:12.485 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:13.050 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:13.050 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2459303 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:13.615 nvmf hotplug test: fio failed as expected 00:13:13.615 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.182 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:13:14.182 rmmod nvme_tcp 00:13:14.182 rmmod nvme_fabrics 00:13:14.182 rmmod nvme_keyring 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2456992 ']' 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2456992 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2456992 ']' 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2456992 00:13:14.182 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:13:14.440 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.440 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2456992 00:13:14.440 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.440 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.440 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2456992' 00:13:14.440 killing process with pid 2456992 00:13:14.440 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2456992 00:13:14.440 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2456992 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.699 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.605 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.605 00:13:16.605 real 0m29.223s 00:13:16.605 user 1m46.329s 00:13:16.605 sys 0m8.050s 00:13:16.605 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.605 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.605 ************************************ 00:13:16.605 END TEST nvmf_fio_target 00:13:16.605 ************************************ 00:13:16.605 14:07:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:16.605 14:07:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.605 14:07:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.605 14:07:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:16.864 ************************************ 00:13:16.864 START TEST nvmf_bdevio 00:13:16.864 ************************************ 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:16.864 * Looking for test storage... 00:13:16.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.864 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.865 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.865 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.865 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:19.398 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:19.398 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.398 
14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:19.398 Found net devices under 0000:84:00.0: cvl_0_0 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:19.398 Found net devices under 0000:84:00.1: cvl_0_1 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.398 14:07:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.398 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:13:19.657 00:13:19.657 --- 10.0.0.2 ping statistics --- 00:13:19.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.657 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:13:19.657 00:13:19.657 --- 10.0.0.1 ping statistics --- 00:13:19.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.657 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2462291 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2462291 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2462291 ']' 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.657 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.658 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.658 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.658 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.658 [2024-07-26 14:07:36.394633] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:13:19.658 [2024-07-26 14:07:36.394723] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.658 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.658 [2024-07-26 14:07:36.471008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.916 [2024-07-26 14:07:36.595350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.916 [2024-07-26 14:07:36.595411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.916 [2024-07-26 14:07:36.595435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.916 [2024-07-26 14:07:36.595451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.916 [2024-07-26 14:07:36.595462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.916 [2024-07-26 14:07:36.595554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:19.916 [2024-07-26 14:07:36.595612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:19.916 [2024-07-26 14:07:36.595869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:19.916 [2024-07-26 14:07:36.595875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:19.916 [2024-07-26 14:07:36.775545] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.916 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 Malloc0 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 [2024-07-26 14:07:36.833349] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:20.175 { 00:13:20.175 "params": { 00:13:20.175 "name": "Nvme$subsystem", 00:13:20.175 "trtype": "$TEST_TRANSPORT", 00:13:20.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:20.175 "adrfam": "ipv4", 00:13:20.175 "trsvcid": "$NVMF_PORT", 00:13:20.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:20.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:20.175 "hdgst": ${hdgst:-false}, 00:13:20.175 "ddgst": ${ddgst:-false} 00:13:20.175 }, 00:13:20.175 "method": "bdev_nvme_attach_controller" 00:13:20.175 } 00:13:20.175 EOF 00:13:20.175 )") 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:20.175 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:20.175 "params": { 00:13:20.175 "name": "Nvme1", 00:13:20.175 "trtype": "tcp", 00:13:20.175 "traddr": "10.0.0.2", 00:13:20.175 "adrfam": "ipv4", 00:13:20.175 "trsvcid": "4420", 00:13:20.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.175 "hdgst": false, 00:13:20.175 "ddgst": false 00:13:20.175 }, 00:13:20.175 "method": "bdev_nvme_attach_controller" 00:13:20.175 }' 00:13:20.175 [2024-07-26 14:07:36.885276] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:13:20.175 [2024-07-26 14:07:36.885358] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462327 ] 00:13:20.175 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.175 [2024-07-26 14:07:36.955119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.461 [2024-07-26 14:07:37.081848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.461 [2024-07-26 14:07:37.081905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.461 [2024-07-26 14:07:37.081910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.461 I/O targets: 00:13:20.461 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:20.461 00:13:20.461 00:13:20.461 CUnit - A unit testing framework for C - Version 2.1-3 00:13:20.461 http://cunit.sourceforge.net/ 00:13:20.461 00:13:20.461 00:13:20.461 Suite: bdevio tests on: Nvme1n1 00:13:20.461 Test: blockdev write read block ...passed 00:13:20.734 Test: blockdev write zeroes read block ...passed 00:13:20.734 Test: blockdev write zeroes read no split ...passed 00:13:20.734 Test: blockdev write zeroes read split ...passed 00:13:20.734 Test: blockdev write zeroes read split partial ...passed 00:13:20.734 Test: blockdev reset ...[2024-07-26 14:07:37.483676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:20.734 [2024-07-26 14:07:37.483792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f80bd0 (9): Bad file descriptor 00:13:20.734 [2024-07-26 14:07:37.495509] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:20.734 passed 00:13:20.734 Test: blockdev write read 8 blocks ...passed 00:13:20.734 Test: blockdev write read size > 128k ...passed 00:13:20.734 Test: blockdev write read invalid size ...passed 00:13:20.734 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:20.734 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:20.734 Test: blockdev write read max offset ...passed 00:13:20.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:20.992 Test: blockdev writev readv 8 blocks ...passed 00:13:20.992 Test: blockdev writev readv 30 x 1block ...passed 00:13:20.992 Test: blockdev writev readv block ...passed 00:13:20.992 Test: blockdev writev readv size > 128k ...passed 00:13:20.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:20.992 Test: blockdev comparev and writev ...[2024-07-26 14:07:37.718135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.718175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.718202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.718220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.718781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.718808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.718832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.718850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.719412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.719445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.719470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.719488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.720050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.720076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.720100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.992 [2024-07-26 14:07:37.720117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:20.992 passed 00:13:20.992 Test: blockdev nvme passthru rw ...passed 00:13:20.992 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:07:37.802951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.992 [2024-07-26 14:07:37.802980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.803330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.992 [2024-07-26 14:07:37.803356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.803599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.992 [2024-07-26 14:07:37.803625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:20.992 [2024-07-26 14:07:37.803872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.992 [2024-07-26 14:07:37.803897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:20.992 passed 00:13:20.992 Test: blockdev nvme admin passthru ...passed 00:13:20.992 Test: blockdev copy ...passed 00:13:20.992 00:13:20.992 Run Summary: Type Total Ran Passed Failed Inactive 00:13:20.992 suites 1 1 n/a 0 0 00:13:20.992 tests 23 23 23 0 0 00:13:20.992 asserts 152 152 152 0 n/a 00:13:20.992 00:13:20.992 Elapsed time = 1.160 seconds 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.250 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.250 rmmod nvme_tcp 00:13:21.509 rmmod nvme_fabrics 00:13:21.509 rmmod nvme_keyring 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2462291 ']' 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2462291 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2462291 ']' 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2462291 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2462291 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2462291' 00:13:21.509 killing process with pid 2462291 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2462291 00:13:21.509 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2462291 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.769 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.307 00:13:24.307 real 0m7.186s 00:13:24.307 user 0m10.688s 00:13:24.307 sys 0m2.650s 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:24.307 ************************************ 00:13:24.307 END TEST nvmf_bdevio 00:13:24.307 ************************************ 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:24.307 00:13:24.307 real 4m21.100s 00:13:24.307 user 11m18.917s 00:13:24.307 sys 1m19.918s 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:24.307 ************************************ 00:13:24.307 END TEST nvmf_target_core 00:13:24.307 ************************************ 00:13:24.307 14:07:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:24.307 14:07:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.307 14:07:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.307 14:07:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.307 ************************************ 00:13:24.307 START TEST nvmf_target_extra 00:13:24.307 ************************************ 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:24.307 * Looking for test storage... 00:13:24.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
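
The nvmf_example test launched above brings up an NVMe-oF TCP target before driving I/O at it; the rpc_cmd traces further down in this log show the exact sequence. As a standalone sketch, the same bring-up can be reproduced with scripts/rpc.py against an already-running SPDK target application (the RPC names and arguments below are taken verbatim from this run; the default /var/tmp/spdk.sock control socket is an assumption of the sketch):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport with an 8192-byte I/O unit, as traced below
scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB RAM-backed bdev, 512-byte blocks -> "Malloc0"
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation later in the test connects with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', i.e. directly at the listener created by the last call.
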
00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.307 ************************************ 00:13:24.307 START TEST nvmf_example 00:13:24.307 ************************************ 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:24.307 * Looking for test storage... 00:13:24.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.307 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.308 14:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.308 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.308 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:27.598 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:27.599 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:27.599 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:27.599 Found net devices under 0000:84:00.0: cvl_0_0 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.599 14:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:27.599 Found net devices under 0000:84:00.1: cvl_0_1 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:13:27.599 00:13:27.599 --- 10.0.0.2 ping statistics --- 00:13:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.599 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:13:27.599 00:13:27.599 --- 10.0.0.1 ping statistics --- 00:13:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.599 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.599 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2464592 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2464592 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2464592 ']' 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.600 14:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:27.600 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:27.600 EAL: No free 2048 kB hugepages reported on node 1
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:27.600 14:07:44
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:27.600 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:27.600 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.814 Initializing NVMe Controllers 00:13:39.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:39.814 Initialization complete. Launching workers. 00:13:39.814 ======================================================== 00:13:39.814 Latency(us) 00:13:39.814 Device Information : IOPS MiB/s Average min max 00:13:39.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14342.60 56.03 4462.37 762.33 16453.22 00:13:39.814 ======================================================== 00:13:39.814 Total : 14342.60 56.03 4462.37 762.33 16453.22 00:13:39.814 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.814 rmmod nvme_tcp 00:13:39.814 rmmod nvme_fabrics 00:13:39.814 rmmod nvme_keyring 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2464592 ']' 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2464592 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2464592 ']' 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2464592 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.814 14:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2464592 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2464592' 00:13:39.814 killing process with pid 2464592 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2464592 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2464592 00:13:39.814 nvmf threads initialize successfully 00:13:39.814 bdev subsystem init successfully 00:13:39.814 created a nvmf target service 00:13:39.814 create targets's poll groups done 00:13:39.814 all subsystems of target started 00:13:39.814 nvmf target is running 00:13:39.814 all subsystems of target stopped 00:13:39.814 destroy targets's poll groups done 00:13:39.814 destroyed the nvmf target service 00:13:39.814 bdev subsystem finish successfully 00:13:39.814 nvmf threads destroy successfully 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.814 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.391 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.391 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:40.391 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:40.391 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:40.391 00:13:40.391 real 0m16.114s 00:13:40.391 user 0m42.500s 00:13:40.391 sys 0m4.002s 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:40.391 ************************************ 00:13:40.391 END TEST nvmf_example 00:13:40.391 ************************************ 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.391 14:07:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.391 ************************************ 00:13:40.391 START TEST nvmf_filesystem 00:13:40.391 ************************************ 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:40.391 * Looking for test storage... 00:13:40.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:40.391 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:40.391 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:40.391 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:40.391 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:40.392 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:40.392 #define SPDK_CONFIG_H 00:13:40.392 #define SPDK_CONFIG_APPS 1 00:13:40.392 #define SPDK_CONFIG_ARCH native 00:13:40.392 #undef SPDK_CONFIG_ASAN 00:13:40.392 #undef SPDK_CONFIG_AVAHI 00:13:40.392 #undef SPDK_CONFIG_CET 00:13:40.392 #define SPDK_CONFIG_COVERAGE 1 00:13:40.392 #define SPDK_CONFIG_CROSS_PREFIX 00:13:40.392 #undef SPDK_CONFIG_CRYPTO 00:13:40.392 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:40.392 #undef SPDK_CONFIG_CUSTOMOCF 00:13:40.392 #undef SPDK_CONFIG_DAOS 00:13:40.392 #define SPDK_CONFIG_DAOS_DIR 00:13:40.392 #define SPDK_CONFIG_DEBUG 1 00:13:40.392 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:40.392 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:40.392 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:40.392 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:40.392 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:40.392 #undef SPDK_CONFIG_DPDK_UADK 00:13:40.392 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:40.392 #define SPDK_CONFIG_EXAMPLES 1 00:13:40.392 #undef SPDK_CONFIG_FC 00:13:40.392 #define SPDK_CONFIG_FC_PATH 00:13:40.392 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:40.392 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:40.392 #undef SPDK_CONFIG_FUSE 00:13:40.392 #undef SPDK_CONFIG_FUZZER 00:13:40.392 #define SPDK_CONFIG_FUZZER_LIB 00:13:40.392 #undef SPDK_CONFIG_GOLANG 00:13:40.392 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:40.392 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:40.392 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:40.392 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:40.392 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:40.392 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:40.392 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:40.392 #define SPDK_CONFIG_IDXD 1 00:13:40.392 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:40.392 #undef SPDK_CONFIG_IPSEC_MB 00:13:40.392 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:40.392 #define SPDK_CONFIG_ISAL 1 00:13:40.392 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:40.392 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:40.392 #define SPDK_CONFIG_LIBDIR 00:13:40.392 #undef SPDK_CONFIG_LTO 00:13:40.392 #define SPDK_CONFIG_MAX_LCORES 128 00:13:40.392 #define SPDK_CONFIG_NVME_CUSE 1 00:13:40.392 #undef SPDK_CONFIG_OCF 00:13:40.392 #define SPDK_CONFIG_OCF_PATH 00:13:40.392 #define SPDK_CONFIG_OPENSSL_PATH 00:13:40.392 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:40.392 #define SPDK_CONFIG_PGO_DIR 00:13:40.392 #undef SPDK_CONFIG_PGO_USE 00:13:40.392 #define SPDK_CONFIG_PREFIX /usr/local 00:13:40.392 #undef SPDK_CONFIG_RAID5F 00:13:40.392 #undef SPDK_CONFIG_RBD 00:13:40.392 #define SPDK_CONFIG_RDMA 1 00:13:40.392 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:40.392 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:40.392 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:40.392 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:40.392 #define SPDK_CONFIG_SHARED 1 00:13:40.392 #undef SPDK_CONFIG_SMA 00:13:40.392 #define SPDK_CONFIG_TESTS 1 00:13:40.392 #undef SPDK_CONFIG_TSAN 00:13:40.392 #define SPDK_CONFIG_UBLK 1 00:13:40.392 #define SPDK_CONFIG_UBSAN 1 00:13:40.392 #undef SPDK_CONFIG_UNIT_TESTS 00:13:40.392 #undef SPDK_CONFIG_URING 00:13:40.392 #define SPDK_CONFIG_URING_PATH 00:13:40.392 #undef SPDK_CONFIG_URING_ZNS 00:13:40.392 #undef SPDK_CONFIG_USDT 00:13:40.392 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:40.392 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:40.392 #define SPDK_CONFIG_VFIO_USER 1 00:13:40.392 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:13:40.392 #define SPDK_CONFIG_VHOST 1 00:13:40.392 #define SPDK_CONFIG_VIRTIO 1 00:13:40.392 #undef SPDK_CONFIG_VTUNE 00:13:40.392 #define SPDK_CONFIG_VTUNE_DIR 00:13:40.392 #define SPDK_CONFIG_WERROR 1 00:13:40.392 #define SPDK_CONFIG_WPDK_DIR 00:13:40.392 #undef SPDK_CONFIG_XNVME 00:13:40.392 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:40.392 14:07:57 
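[annotation] The applications.sh@23 test earlier in this trace slurps include/spdk/config.h and glob-matches the whole file against "#define SPDK_CONFIG_DEBUG"; the backslash-escaped run above is xtrace's rendering of that pattern. The underlying bash idiom, sketched with a placeholder path:

  cfg=include/spdk/config.h                      # placeholder, not the workspace path
  if [[ -e $cfg && $(<"$cfg") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build"                         # the header defines DEBUG
  fi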
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
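[annotation] pm/common, just traced, starts from a base monitor list and appends the temperature and BMC collectors only when the host is neither QEMU nor a container, keeping an associative array of which collectors need sudo. A rough sketch of that gating pattern; collector names are copied from the trace, but the QEMU probe here is simplified:

  declare -A needs_sudo=([collect-bmc-pm]=1 [collect-cpu-load]=0 [collect-cpu-temp]=0 [collect-vmstat]=0)
  monitors=(collect-cpu-load collect-vmstat)       # always-on base set
  if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
      monitors+=(collect-cpu-temp collect-bmc-pm)  # bare-metal extras
  fi
  for m in "${monitors[@]}"; do
      echo "$m (sudo: ${needs_sudo[$m]})"
  done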
00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:40.392 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:40.392 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:40.393 14:07:57 
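[annotation] This long run of ": <value>" immediately followed by "export VAR" is the standard bash default-setting idiom: ":" is a no-op command whose argument expansion performs the assignment only when the variable is unset. Sketched with a hypothetical flag name:

  : "${SPDK_TEST_DEMO:=0}"      # assigns 0 only if SPDK_TEST_DEMO is unset or null
  export SPDK_TEST_DEMO
  echo "SPDK_TEST_DEMO=$SPDK_TEST_DEMO"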
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2466272 ]] 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2466272 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.BplQnw 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.BplQnw/tests/target /tmp/spdk.BplQnw 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # grep -v Filesystem 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=949354496 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4335075328 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=38608969728 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=45083295744 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6474326016 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22531727360 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=8994226176 00:13:40.393 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=9016659968 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22433792 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22540726272 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=921600 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4508323840 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4508327936 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:13:40.394 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:13:40.653 * Looking for test storage... 
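[annotation] set_test_storage, whose body this trace just walked, is at heart a df -T parser: each row lands in mount-keyed associative arrays, and the first candidate directory whose free space covers the request wins. A compressed sketch of the parse-and-pick; the request size is copied from the trace, the candidate list is a stand-in, and the real script does more (overlay/tmpfs special cases):

  requested=2214592512                             # bytes, as in the trace above
  declare -A fss avails
  while read -r src fs _ _ avail _ mount; do
      fss[$mount]=$fs
      avails[$mount]=$((avail * 1024))             # df -T reports 1K blocks
  done < <(df -T | grep -v Filesystem)
  for dir in /tmp /; do                            # stand-ins for storage_candidates
      mount=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')
      if (( avails[$mount] >= requested )); then
          echo "using $dir on $mount (${fss[$mount]})"
          break
      fi
  done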
00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=38608969728 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8688918528 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.653 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.654 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.188 
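[annotation] gather_supported_nvmf_pci_devs, starting here, fills the e810/x722/mlx arrays from pci_bus_cache, an associative map from vendor:device ID pairs to PCI addresses. A self-contained approximation of that cache; the population loop is a guess at the mechanism from sysfs layout, not the script's own code, and only the E810 ID pair is taken from the trace below:

  declare -A pci_bus_cache
  for dev in /sys/bus/pci/devices/*; do
      [[ -r $dev/vendor && -r $dev/device ]] || continue
      key="$(<"$dev/vendor"):$(<"$dev/device")"    # e.g. "0x8086:0x159b"
      pci_bus_cache[$key]+="${pci_bus_cache[$key]:+ }${dev##*/}"
  done
  e810=(${pci_bus_cache["0x8086:0x159b"]})         # unquoted on purpose: split addresses into an array
  echo "E810 candidates: ${e810[@]:-none}"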
14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:43.188 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:43.188 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:43.188 Found net devices under 0000:84:00.0: cvl_0_0 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.188 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:43.189 Found net devices under 0000:84:00.1: cvl_0_1 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.189 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:13:43.189 00:13:43.189 --- 10.0.0.2 ping statistics --- 00:13:43.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.189 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:13:43.189 00:13:43.189 --- 10.0.0.1 ping statistics --- 00:13:43.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.189 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.189 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.448 ************************************ 00:13:43.448 START TEST nvmf_filesystem_no_in_capsule 00:13:43.448 ************************************ 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2467920 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2467920 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2467920 ']' 00:13:43.448 
14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.448 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.448 [2024-07-26 14:08:00.151295] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:13:43.448 [2024-07-26 14:08:00.151370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.448 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.448 [2024-07-26 14:08:00.222783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.706 [2024-07-26 14:08:00.348525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.706 [2024-07-26 14:08:00.348579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.706 [2024-07-26 14:08:00.348597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.706 [2024-07-26 14:08:00.348610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.706 [2024-07-26 14:08:00.348621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
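Before the reactor startup notices that follow, it helps to restate what nvmf_tcp_init did a few records back: the first E810 port (cvl_0_0) is moved into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) can exercise real hardware on a single host, and an iptables rule admits the NVMe/TCP port. Condensed from the trace, to be run as root:

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                            # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1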
00:13:43.706 [2024-07-26 14:08:00.348678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.706 [2024-07-26 14:08:00.348730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.706 [2024-07-26 14:08:00.348794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.706 [2024-07-26 14:08:00.348797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.641 [2024-07-26 14:08:01.464260] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.641 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.899 Malloc1 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.899 14:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.899 [2024-07-26 14:08:01.647297] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.899 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:44.900 { 00:13:44.900 "name": "Malloc1", 00:13:44.900 "aliases": [ 00:13:44.900 "8ce03953-4141-4613-8e94-866b36088e14" 00:13:44.900 ], 00:13:44.900 "product_name": "Malloc disk", 00:13:44.900 "block_size": 512, 00:13:44.900 "num_blocks": 1048576, 00:13:44.900 "uuid": "8ce03953-4141-4613-8e94-866b36088e14", 00:13:44.900 "assigned_rate_limits": { 00:13:44.900 "rw_ios_per_sec": 0, 00:13:44.900 "rw_mbytes_per_sec": 0, 00:13:44.900 "r_mbytes_per_sec": 0, 00:13:44.900 "w_mbytes_per_sec": 0 00:13:44.900 }, 00:13:44.900 "claimed": true, 00:13:44.900 "claim_type": "exclusive_write", 00:13:44.900 "zoned": false, 00:13:44.900 "supported_io_types": { 00:13:44.900 "read": 
true, 00:13:44.900 "write": true, 00:13:44.900 "unmap": true, 00:13:44.900 "flush": true, 00:13:44.900 "reset": true, 00:13:44.900 "nvme_admin": false, 00:13:44.900 "nvme_io": false, 00:13:44.900 "nvme_io_md": false, 00:13:44.900 "write_zeroes": true, 00:13:44.900 "zcopy": true, 00:13:44.900 "get_zone_info": false, 00:13:44.900 "zone_management": false, 00:13:44.900 "zone_append": false, 00:13:44.900 "compare": false, 00:13:44.900 "compare_and_write": false, 00:13:44.900 "abort": true, 00:13:44.900 "seek_hole": false, 00:13:44.900 "seek_data": false, 00:13:44.900 "copy": true, 00:13:44.900 "nvme_iov_md": false 00:13:44.900 }, 00:13:44.900 "memory_domains": [ 00:13:44.900 { 00:13:44.900 "dma_device_id": "system", 00:13:44.900 "dma_device_type": 1 00:13:44.900 }, 00:13:44.900 { 00:13:44.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.900 "dma_device_type": 2 00:13:44.900 } 00:13:44.900 ], 00:13:44.900 "driver_specific": {} 00:13:44.900 } 00:13:44.900 ]' 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:44.900 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.464 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.464 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.464 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.464 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:45.464 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.023 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:48.024 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.397 ************************************ 00:13:49.397 START TEST filesystem_ext4 00:13:49.397 ************************************ 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
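Before the ext4 run's output begins, note the host-side attach sequence just traced: connect to the subsystem over TCP, give udev a moment, locate the new namespace by its serial, confirm its size matches the 512 MiB malloc bdev, then lay down a GPT with a single partition. A condensed sketch (the awk line is a simpler stand-in for the grep -oP lookahead the script uses):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02
    sleep 2                                       # let udev create the block node
    nvme_name=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1; exit}')
    size=$(<"/sys/block/$nvme_name/size")         # size in 512-byte sectors
    (( size * 512 == 536870912 )) || echo "size mismatch" >&2
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe                                     # re-read the partition table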
00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:49.397 14:08:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:49.397 mke2fs 1.46.5 (30-Dec-2021) 00:13:49.397 Discarding device blocks: 0/522240 done 00:13:49.397 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:49.397 Filesystem UUID: af27ea85-ba2b-4dd3-a106-756247c2c88b 00:13:49.397 Superblock backups stored on blocks: 00:13:49.397 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:49.397 00:13:49.397 Allocating group tables: 0/64 done 00:13:49.397 Writing inode tables: 0/64 done 00:13:51.553 Creating journal (8192 blocks): done 00:13:52.486 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:13:52.486 00:13:52.486 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:52.486 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:52.744 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:53.002 
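Each filesystem_* subtest drives the same cycle just shown for ext4 and repeated below for btrfs and xfs: force-make the filesystem on the exported partition, mount it, create and remove a file with syncs in between, unmount, and assert the target process is still alive. As a generic sketch (nvmfpid is the target PID logged above; note ext4 spells its force flag -F where btrfs and xfs use -f):

    fstype=ext4                     # also run with btrfs and xfs
    dev=/dev/nvme0n1p1
    force=-f; [[ $fstype == ext4 ]] && force=-F
    "mkfs.$fstype" $force "$dev"
    mount "$dev" /mnt/device
    touch /mnt/device/aaa           # exercise writes over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"              # target must have survived the I/O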
14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2467920 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:53.002 00:13:53.002 real 0m3.799s 00:13:53.002 user 0m0.016s 00:13:53.002 sys 0m0.065s 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:53.002 ************************************ 00:13:53.002 END TEST filesystem_ext4 00:13:53.002 ************************************ 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:53.002 ************************************ 00:13:53.002 START TEST filesystem_btrfs 00:13:53.002 ************************************ 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:53.002 14:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:53.002 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:53.260 btrfs-progs v6.6.2 00:13:53.260 See https://btrfs.readthedocs.io for more information. 00:13:53.260 00:13:53.260 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:53.260 NOTE: several default settings have changed in version 5.15, please make sure 00:13:53.260 this does not affect your deployments: 00:13:53.261 - DUP for metadata (-m dup) 00:13:53.261 - enabled no-holes (-O no-holes) 00:13:53.261 - enabled free-space-tree (-R free-space-tree) 00:13:53.261 00:13:53.261 Label: (null) 00:13:53.261 UUID: a6dd95e2-ebd8-4345-870f-50fe370bfede 00:13:53.261 Node size: 16384 00:13:53.261 Sector size: 4096 00:13:53.261 Filesystem size: 510.00MiB 00:13:53.261 Block group profiles: 00:13:53.261 Data: single 8.00MiB 00:13:53.261 Metadata: DUP 32.00MiB 00:13:53.261 System: DUP 8.00MiB 00:13:53.261 SSD detected: yes 00:13:53.261 Zoned device: no 00:13:53.261 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:53.261 Runtime features: free-space-tree 00:13:53.261 Checksum: crc32c 00:13:53.261 Number of devices: 1 00:13:53.261 Devices: 00:13:53.261 ID SIZE PATH 00:13:53.261 1 510.00MiB /dev/nvme0n1p1 00:13:53.261 00:13:53.261 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:53.261 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2467920 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:54.194 00:13:54.194 real 0m1.158s 00:13:54.194 user 0m0.018s 00:13:54.194 sys 0m0.119s 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:54.194 ************************************ 00:13:54.194 END TEST filesystem_btrfs 00:13:54.194 ************************************ 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:54.194 ************************************ 00:13:54.194 START TEST filesystem_xfs 00:13:54.194 ************************************ 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:54.194 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:54.195 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:54.195 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:54.195 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:54.195 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:54.195 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:54.195 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:54.452 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:54.452 = sectsz=512 attr=2, projid32bit=1 00:13:54.452 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:54.452 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:13:54.452 data = bsize=4096 blocks=130560, imaxpct=25 00:13:54.452 = sunit=0 swidth=0 blks 00:13:54.452 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:54.452 log =internal log bsize=4096 blocks=16384, version=2 00:13:54.452 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:54.452 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:55.385 Discarding blocks...Done. 00:13:55.385 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:55.385 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2467920 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:57.912 00:13:57.912 real 0m3.668s 00:13:57.912 user 0m0.017s 00:13:57.912 sys 0m0.064s 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:57.912 ************************************ 00:13:57.912 END TEST filesystem_xfs 00:13:57.912 ************************************ 00:13:57.912 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:58.170 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:58.170 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2467920 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2467920 ']' 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2467920 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2467920 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2467920' 00:13:58.428 killing process with pid 2467920 00:13:58.428 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2467920 00:13:58.428 14:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2467920 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:58.994 00:13:58.994 real 0m15.599s 00:13:58.994 user 1m0.526s 00:13:58.994 sys 0m2.032s 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.994 ************************************ 00:13:58.994 END TEST nvmf_filesystem_no_in_capsule 00:13:58.994 ************************************ 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.994 ************************************ 00:13:58.994 START TEST nvmf_filesystem_in_capsule 00:13:58.994 ************************************ 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2469998 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2469998 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2469998 ']' 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:58.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.994 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:58.994 [2024-07-26 14:08:15.817361] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:13:58.994 [2024-07-26 14:08:15.817466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.994 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.253 [2024-07-26 14:08:15.894313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.253 [2024-07-26 14:08:16.018321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.253 [2024-07-26 14:08:16.018383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.253 [2024-07-26 14:08:16.018399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.253 [2024-07-26 14:08:16.018413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.253 [2024-07-26 14:08:16.018424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.253 [2024-07-26 14:08:16.018497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.253 [2024-07-26 14:08:16.018554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.253 [2024-07-26 14:08:16.018608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.253 [2024-07-26 14:08:16.018612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.511 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
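The only functional difference from the no_in_capsule run above is the transport's in-capsule data size: the earlier run created the TCP transport with -c 0, while this one passes -c 4096, so host writes up to 4 KiB travel inside the command capsule instead of being pulled by the target in a separate data transfer. The two rpc_cmd invocations, shown as their plain rpc.py equivalents (script path assumed):

    # no_in_capsule run: in-capsule data disabled
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # in_capsule run: up to 4096 bytes of immediate data per command
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096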
00:13:59.511 [2024-07-26 14:08:16.188331] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.512 Malloc1 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.512 [2024-07-26 14:08:16.386585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:59.512 14:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.512 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:59.769 { 00:13:59.769 "name": "Malloc1", 00:13:59.769 "aliases": [ 00:13:59.769 "662bbdb4-9d6f-43e2-9413-d23a0f3113a3" 00:13:59.769 ], 00:13:59.769 "product_name": "Malloc disk", 00:13:59.769 "block_size": 512, 00:13:59.769 "num_blocks": 1048576, 00:13:59.769 "uuid": "662bbdb4-9d6f-43e2-9413-d23a0f3113a3", 00:13:59.769 "assigned_rate_limits": { 00:13:59.769 "rw_ios_per_sec": 0, 00:13:59.769 "rw_mbytes_per_sec": 0, 00:13:59.769 "r_mbytes_per_sec": 0, 00:13:59.769 "w_mbytes_per_sec": 0 00:13:59.769 }, 00:13:59.769 "claimed": true, 00:13:59.769 "claim_type": "exclusive_write", 00:13:59.769 "zoned": false, 00:13:59.769 "supported_io_types": { 00:13:59.769 "read": true, 00:13:59.769 "write": true, 00:13:59.769 "unmap": true, 00:13:59.769 "flush": true, 00:13:59.769 "reset": true, 00:13:59.769 "nvme_admin": false, 00:13:59.769 "nvme_io": false, 00:13:59.769 "nvme_io_md": false, 00:13:59.769 "write_zeroes": true, 00:13:59.769 "zcopy": true, 00:13:59.769 "get_zone_info": false, 00:13:59.769 "zone_management": false, 00:13:59.769 "zone_append": false, 00:13:59.769 "compare": false, 00:13:59.769 "compare_and_write": false, 00:13:59.769 "abort": true, 00:13:59.769 "seek_hole": false, 00:13:59.769 "seek_data": false, 00:13:59.769 "copy": true, 00:13:59.769 "nvme_iov_md": false 00:13:59.769 }, 00:13:59.769 "memory_domains": [ 00:13:59.769 { 00:13:59.769 "dma_device_id": "system", 00:13:59.769 "dma_device_type": 1 00:13:59.769 }, 00:13:59.769 { 00:13:59.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.769 "dma_device_type": 2 00:13:59.769 } 00:13:59.769 ], 00:13:59.769 "driver_specific": {} 00:13:59.769 } 00:13:59.769 ]' 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:59.769 14:08:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:59.770 14:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.335 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.335 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:00.335 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.335 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:00.335 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.861 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.861 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:02.861 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.861 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:02.861 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.861 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:02.861 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:02.862 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:04.275 ************************************ 00:14:04.275 START TEST filesystem_in_capsule_ext4 00:14:04.275 ************************************ 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:04.275 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:04.275 mke2fs 1.46.5 (30-Dec-2021) 00:14:04.275 Discarding device blocks: 0/522240 done 00:14:04.275 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:04.275 Filesystem UUID: ef5a928d-4e24-4f82-b48f-716b6a06c9e5 00:14:04.275 Superblock backups stored on blocks: 00:14:04.275 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:14:04.275 00:14:04.275 Allocating group tables: 0/64 done 00:14:04.275 Writing inode tables: 0/64 done 00:14:05.647 Creating journal (8192 blocks): done 00:14:06.470 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:14:06.470 00:14:06.470 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:14:06.470 14:08:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2469998 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:07.403 00:14:07.403 real 0m3.414s 00:14:07.403 user 0m0.021s 00:14:07.403 sys 0m0.062s 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:07.403 ************************************ 00:14:07.403 END TEST filesystem_in_capsule_ext4 00:14:07.403 ************************************ 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.403 14:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:07.403 ************************************ 00:14:07.403 START TEST filesystem_in_capsule_btrfs 00:14:07.403 ************************************ 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:14:07.403 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:07.661 btrfs-progs v6.6.2 00:14:07.661 See https://btrfs.readthedocs.io for more information. 00:14:07.661 00:14:07.661 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
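The mkfs.btrfs invocation above comes from the make_filesystem helper in autotest_common.sh: the trace shows it binding fstype and dev_name (lines 926-927), initializing a retry counter (line 928), and picking the force flag, uppercase -F for ext4 (line 932) versus lowercase -f for btrfs and xfs (line 934), before calling mkfs.<fstype> (line 937). A sketch of that helper reconstructed from the visible trace; everything between the mkfs call and the `return 0` at line 945 is not shown in this excerpt, so the retry loop below is an assumption suggested by `local i=0`, not traced behavior:

  # Reconstructed from the xtrace above; the retry loop is assumed, not traced.
  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F            # mkfs.ext4 takes uppercase -F to force
      else
          force=-f            # mkfs.btrfs and mkfs.xfs take lowercase -f
      fi
      until mkfs."$fstype" $force "$dev_name"; do
          (( ++i > 3 )) && return 1   # hypothetical retry bound
          sleep 1
      done
      return 0
  }

  # Invoked by the test as: make_filesystem btrfs /dev/nvme0n1p1

The same helper produced the ext4 run earlier and the xfs run that follows, which is why all three filesystem tests show the identical local-variable preamble before their mkfs output.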
00:14:07.661 NOTE: several default settings have changed in version 5.15, please make sure 00:14:07.661 this does not affect your deployments: 00:14:07.661 - DUP for metadata (-m dup) 00:14:07.661 - enabled no-holes (-O no-holes) 00:14:07.661 - enabled free-space-tree (-R free-space-tree) 00:14:07.661 00:14:07.661 Label: (null) 00:14:07.661 UUID: 81a5574a-6430-4ae5-af70-1289411f6235 00:14:07.661 Node size: 16384 00:14:07.661 Sector size: 4096 00:14:07.661 Filesystem size: 510.00MiB 00:14:07.661 Block group profiles: 00:14:07.661 Data: single 8.00MiB 00:14:07.661 Metadata: DUP 32.00MiB 00:14:07.661 System: DUP 8.00MiB 00:14:07.661 SSD detected: yes 00:14:07.661 Zoned device: no 00:14:07.661 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:07.661 Runtime features: free-space-tree 00:14:07.661 Checksum: crc32c 00:14:07.661 Number of devices: 1 00:14:07.661 Devices: 00:14:07.661 ID SIZE PATH 00:14:07.661 1 510.00MiB /dev/nvme0n1p1 00:14:07.661 00:14:07.661 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:14:07.661 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2469998 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:08.595 00:14:08.595 real 0m1.177s 00:14:08.595 user 0m0.032s 00:14:08.595 sys 0m0.107s 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.595 14:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:08.595 ************************************ 00:14:08.595 END TEST filesystem_in_capsule_btrfs 00:14:08.595 ************************************ 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.595 ************************************ 00:14:08.595 START TEST filesystem_in_capsule_xfs 00:14:08.595 ************************************ 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:14:08.595 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:08.853 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:08.853 = sectsz=512 attr=2, projid32bit=1 00:14:08.853 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:08.853 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:08.853 data = bsize=4096 blocks=130560, imaxpct=25 00:14:08.853 = sunit=0 swidth=0 blks 00:14:08.853 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:08.853 log =internal log bsize=4096 blocks=16384, version=2 00:14:08.853 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:08.853 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:14:09.784 Discarding blocks...Done. 00:14:09.785 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:09.785 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:11.681 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:11.681 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:11.681 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:11.681 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:11.681 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:11.681 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:11.681 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2469998 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:11.682 00:14:11.682 real 0m2.683s 00:14:11.682 user 0m0.027s 00:14:11.682 sys 0m0.050s 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:11.682 ************************************ 00:14:11.682 END TEST filesystem_in_capsule_xfs 00:14:11.682 ************************************ 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.682 14:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2469998 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2469998 ']' 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2469998 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2469998 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2469998' 00:14:11.682 killing process with pid 2469998 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2469998 00:14:11.682 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2469998 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:12.249 00:14:12.249 real 0m13.164s 00:14:12.249 user 0m50.538s 
00:14:12.249 sys 0m1.824s 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:12.249 ************************************ 00:14:12.249 END TEST nvmf_filesystem_in_capsule 00:14:12.249 ************************************ 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.249 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.249 rmmod nvme_tcp 00:14:12.249 rmmod nvme_fabrics 00:14:12.249 rmmod nvme_keyring 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.249 14:08:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:14.784 00:14:14.784 real 0m33.978s 00:14:14.784 user 1m52.102s 00:14:14.784 sys 0m6.041s 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.784 ************************************ 00:14:14.784 END TEST nvmf_filesystem 00:14:14.784 ************************************ 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.784 ************************************ 00:14:14.784 START TEST nvmf_target_discovery 00:14:14.784 ************************************ 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:14.784 * Looking for test storage... 00:14:14.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.784 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.785 14:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.785 14:08:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.319 14:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.319 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:17.320 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:17.320 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:17.320 Found net devices under 0000:84:00.0: cvl_0_0 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.320 14:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:17.320 Found net devices under 0000:84:00.1: cvl_0_1 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.320 14:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:14:17.320 00:14:17.320 --- 10.0.0.2 ping statistics --- 00:14:17.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.320 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:14:17.320 00:14:17.320 --- 10.0.0.1 ping statistics --- 00:14:17.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.320 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:17.320 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2473626 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2473626 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2473626 ']' 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.320 14:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.320 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.320 [2024-07-26 14:08:34.066300] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:14:17.320 [2024-07-26 14:08:34.066398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.320 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.320 [2024-07-26 14:08:34.151069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.579 [2024-07-26 14:08:34.281367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.579 [2024-07-26 14:08:34.281448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.579 [2024-07-26 14:08:34.281467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.579 [2024-07-26 14:08:34.281481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.579 [2024-07-26 14:08:34.281502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.579 [2024-07-26 14:08:34.281577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.579 [2024-07-26 14:08:34.281629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.579 [2024-07-26 14:08:34.281680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.579 [2024-07-26 14:08:34.281683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.579 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.579 [2024-07-26 14:08:34.464366] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.837 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
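nvmfappstart runs nvmf_tgt inside the namespace and blocks until the RPC socket answers, after which the test creates the TCP transport. A sketch of the same sequence; the polling loop is an assumption standing in for the harness's waitforlisten helper, with spdk_get_version used only as a cheap readiness probe:

# Start the target inside the namespace; binary path and flags as in this run.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Poll the default UNIX-domain RPC socket until the app is up (stand-in for waitforlisten).
until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
# Create the TCP transport; flags copied verbatim from the rpc_cmd above.
"$rpc" nvmf_create_transport -t tcp -o -u 8192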
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 Null1 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 [2024-07-26 14:08:34.504729] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 Null2 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 Null3 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 Null4 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.838 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
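The discovery.sh@26-@35 loop above builds four identical targets: for each i in 1..4 a null bdev backs subsystem nqn.2016-06.io.spdk:cnodeN, which then listens on 10.0.0.2:4420; finally the discovery service itself gets a listener plus a referral pointing at port 4430. Condensed from the rpc_cmd calls in the trace (rpc.py path assumed from this workspace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 4); do
    "$rpc" bdev_null_create "Null$i" 102400 512    # size/block-size arguments as logged
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
# Expose the discovery subsystem on 4420 and advertise a referral to port 4430.
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430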
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:17.838 00:14:17.838 Discovery Log Number of Records 6, Generation counter 6 00:14:17.838 =====Discovery Log Entry 0====== 00:14:17.838 trtype: tcp 00:14:17.838 adrfam: ipv4 00:14:17.838 subtype: current discovery subsystem 00:14:17.838 treq: not required 00:14:17.838 portid: 0 00:14:17.838 trsvcid: 4420 00:14:17.838 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:17.838 traddr: 10.0.0.2 00:14:17.838 eflags: explicit discovery connections, duplicate discovery information 00:14:17.838 sectype: none 00:14:17.839 =====Discovery Log Entry 1====== 00:14:17.839 trtype: tcp 00:14:17.839 adrfam: ipv4 00:14:17.839 subtype: nvme subsystem 00:14:17.839 treq: not required 00:14:17.839 portid: 0 00:14:17.839 trsvcid: 4420 00:14:17.839 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:17.839 traddr: 10.0.0.2 00:14:17.839 eflags: none 00:14:17.839 sectype: none 00:14:17.839 =====Discovery Log Entry 2====== 00:14:17.839 trtype: tcp 00:14:17.839 adrfam: ipv4 00:14:17.839 subtype: nvme subsystem 00:14:17.839 treq: not required 00:14:17.839 portid: 0 00:14:17.839 trsvcid: 4420 00:14:17.839 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:17.839 traddr: 10.0.0.2 00:14:17.839 eflags: none 00:14:17.839 sectype: none 00:14:17.839 =====Discovery Log Entry 3====== 00:14:17.839 trtype: tcp 00:14:17.839 adrfam: ipv4 00:14:17.839 subtype: nvme subsystem 00:14:17.839 treq: not required 00:14:17.839 portid: 0 00:14:17.839 trsvcid: 4420 00:14:17.839 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:17.839 traddr: 10.0.0.2 00:14:17.839 eflags: none 00:14:17.839 sectype: none 00:14:17.839 =====Discovery Log Entry 4====== 00:14:17.839 trtype: tcp 00:14:17.839 adrfam: ipv4 00:14:17.839 subtype: nvme subsystem 00:14:17.839 treq: not required 00:14:17.839 portid: 0 00:14:17.839 trsvcid: 4420 00:14:17.839 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:17.839 traddr: 10.0.0.2 00:14:17.839 eflags: none 00:14:17.839 sectype: none 00:14:17.839 =====Discovery Log Entry 5====== 00:14:17.839 trtype: tcp 00:14:17.839 adrfam: ipv4 00:14:17.839 subtype: discovery subsystem referral 00:14:17.839 treq: not required 00:14:17.839 portid: 0 00:14:17.839 trsvcid: 4430 00:14:17.839 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:17.839 traddr: 10.0.0.2 00:14:17.839 eflags: none 00:14:17.839 sectype: none 00:14:17.839 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:17.839 Perform nvmf subsystem discovery via RPC 00:14:17.839 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:17.839 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.839 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.839 [ 00:14:17.839 { 00:14:17.839 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:17.839 "subtype": "Discovery", 00:14:17.839 "listen_addresses": [ 00:14:17.839 { 00:14:17.839 "trtype": "TCP", 00:14:17.839 "adrfam": "IPv4", 00:14:17.839 "traddr": "10.0.0.2", 00:14:17.839 "trsvcid": "4420" 00:14:17.839 } 00:14:17.839 ], 00:14:17.839 "allow_any_host": true, 00:14:17.839 "hosts": [] 00:14:17.839 }, 00:14:17.839 { 00:14:17.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.839 "subtype": "NVMe", 00:14:17.839 "listen_addresses": [ 00:14:17.839 { 00:14:17.839 "trtype": "TCP", 00:14:17.839 "adrfam": "IPv4", 00:14:17.839 
"traddr": "10.0.0.2", 00:14:17.839 "trsvcid": "4420" 00:14:17.839 } 00:14:17.839 ], 00:14:17.839 "allow_any_host": true, 00:14:17.839 "hosts": [], 00:14:17.839 "serial_number": "SPDK00000000000001", 00:14:17.839 "model_number": "SPDK bdev Controller", 00:14:17.839 "max_namespaces": 32, 00:14:17.839 "min_cntlid": 1, 00:14:17.839 "max_cntlid": 65519, 00:14:17.839 "namespaces": [ 00:14:17.839 { 00:14:17.839 "nsid": 1, 00:14:17.839 "bdev_name": "Null1", 00:14:17.839 "name": "Null1", 00:14:17.839 "nguid": "5C5A9AEB25424F4FB9371820F1EAB199", 00:14:17.839 "uuid": "5c5a9aeb-2542-4f4f-b937-1820f1eab199" 00:14:17.839 } 00:14:17.839 ] 00:14:17.839 }, 00:14:17.839 { 00:14:17.839 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:17.839 "subtype": "NVMe", 00:14:17.839 "listen_addresses": [ 00:14:17.839 { 00:14:17.839 "trtype": "TCP", 00:14:17.839 "adrfam": "IPv4", 00:14:17.839 "traddr": "10.0.0.2", 00:14:17.839 "trsvcid": "4420" 00:14:17.839 } 00:14:17.839 ], 00:14:17.839 "allow_any_host": true, 00:14:17.839 "hosts": [], 00:14:17.839 "serial_number": "SPDK00000000000002", 00:14:17.839 "model_number": "SPDK bdev Controller", 00:14:17.839 "max_namespaces": 32, 00:14:17.839 "min_cntlid": 1, 00:14:17.839 "max_cntlid": 65519, 00:14:17.839 "namespaces": [ 00:14:17.839 { 00:14:17.839 "nsid": 1, 00:14:17.839 "bdev_name": "Null2", 00:14:17.839 "name": "Null2", 00:14:17.839 "nguid": "7CD9F950FB9B430A936FF25A91C5A938", 00:14:17.839 "uuid": "7cd9f950-fb9b-430a-936f-f25a91c5a938" 00:14:17.839 } 00:14:17.839 ] 00:14:17.839 }, 00:14:17.839 { 00:14:17.839 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:17.839 "subtype": "NVMe", 00:14:17.839 "listen_addresses": [ 00:14:17.839 { 00:14:17.839 "trtype": "TCP", 00:14:18.098 "adrfam": "IPv4", 00:14:18.098 "traddr": "10.0.0.2", 00:14:18.098 "trsvcid": "4420" 00:14:18.098 } 00:14:18.098 ], 00:14:18.098 "allow_any_host": true, 00:14:18.098 "hosts": [], 00:14:18.098 "serial_number": "SPDK00000000000003", 00:14:18.098 "model_number": "SPDK bdev Controller", 00:14:18.098 "max_namespaces": 32, 00:14:18.098 "min_cntlid": 1, 00:14:18.098 "max_cntlid": 65519, 00:14:18.098 "namespaces": [ 00:14:18.098 { 00:14:18.098 "nsid": 1, 00:14:18.098 "bdev_name": "Null3", 00:14:18.098 "name": "Null3", 00:14:18.098 "nguid": "4C35E971CE9441EBAC040BAACF1C86A5", 00:14:18.098 "uuid": "4c35e971-ce94-41eb-ac04-0baacf1c86a5" 00:14:18.098 } 00:14:18.098 ] 00:14:18.098 }, 00:14:18.098 { 00:14:18.098 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:18.098 "subtype": "NVMe", 00:14:18.098 "listen_addresses": [ 00:14:18.098 { 00:14:18.098 "trtype": "TCP", 00:14:18.098 "adrfam": "IPv4", 00:14:18.098 "traddr": "10.0.0.2", 00:14:18.098 "trsvcid": "4420" 00:14:18.098 } 00:14:18.098 ], 00:14:18.098 "allow_any_host": true, 00:14:18.098 "hosts": [], 00:14:18.098 "serial_number": "SPDK00000000000004", 00:14:18.098 "model_number": "SPDK bdev Controller", 00:14:18.098 "max_namespaces": 32, 00:14:18.098 "min_cntlid": 1, 00:14:18.098 "max_cntlid": 65519, 00:14:18.098 "namespaces": [ 00:14:18.098 { 00:14:18.098 "nsid": 1, 00:14:18.098 "bdev_name": "Null4", 00:14:18.098 "name": "Null4", 00:14:18.098 "nguid": "456EC4F90ACF4926A2D248C8A19D8596", 00:14:18.098 "uuid": "456ec4f9-0acf-4926-a2d2-48c8a19d8596" 00:14:18.098 } 00:14:18.098 ] 00:14:18.098 } 00:14:18.098 ] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:18.098 14:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:18.098 14:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:18.098 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.099 rmmod nvme_tcp 00:14:18.099 rmmod nvme_fabrics 00:14:18.099 rmmod nvme_keyring 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.099 14:08:34 
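Teardown walks the same loop in reverse order, deleting each subsystem before its backing bdev, drops the 4430 referral, and requires bdev_get_bdevs to come back empty; the EXIT trap (nvmftestfini) then unloads the initiator modules and dismantles the namespace. A condensed equivalent, with ip netns delete as an assumed stand-in for the harness's _remove_spdk_ns helper:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 4); do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # controller first...
    "$rpc" bdev_null_delete "Null$i"                             # ...then the bdev under it
done
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
[ -z "$("$rpc" bdev_get_bdevs | jq -r '.[].name')" ]             # nothing left behind
# nvmftestfini: unload initiator modules, stop the target, tear down the namespace.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1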
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2473626 ']' 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2473626 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2473626 ']' 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2473626 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.099 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2473626 00:14:18.357 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:18.357 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.357 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2473626' 00:14:18.357 killing process with pid 2473626 00:14:18.357 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2473626 00:14:18.357 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2473626 00:14:18.616 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.616 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:18.616 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:18.616 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.616 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.616 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.616 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.617 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.518 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.518 00:14:20.518 real 0m6.267s 00:14:20.518 user 0m4.971s 00:14:20.518 sys 0m2.504s 00:14:20.518 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.518 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.518 ************************************ 00:14:20.518 END TEST nvmf_target_discovery 00:14:20.518 ************************************ 00:14:20.518 14:08:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:20.519 14:08:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:20.519 14:08:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.519 14:08:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.795 ************************************ 00:14:20.795 START TEST nvmf_referrals 00:14:20.795 ************************************ 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:20.795 * Looking for test storage... 00:14:20.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.795 14:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.795 14:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:14:20.795 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.092 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:24.093 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.093 14:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:24.093 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:24.093 Found net devices under 0000:84:00.0: cvl_0_0 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 
00:14:24.093 Found net devices under 0000:84:00.1: cvl_0_1 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:24.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:14:24.093 00:14:24.093 --- 10.0.0.2 ping statistics --- 00:14:24.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.093 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:14:24.093 00:14:24.093 --- 10.0.0.1 ping statistics --- 00:14:24.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.093 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2475864 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2475864 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2475864 ']' 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.093 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.093 [2024-07-26 14:08:40.518051] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:14:24.094 [2024-07-26 14:08:40.518135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.094 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.094 [2024-07-26 14:08:40.604493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:24.094 [2024-07-26 14:08:40.727204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.094 [2024-07-26 14:08:40.727269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.094 [2024-07-26 14:08:40.727286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.094 [2024-07-26 14:08:40.727299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.094 [2024-07-26 14:08:40.727311] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.094 [2024-07-26 14:08:40.727398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.094 [2024-07-26 14:08:40.727462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.094 [2024-07-26 14:08:40.727491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.094 [2024-07-26 14:08:40.727494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 [2024-07-26 14:08:40.897300] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.094 14:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 [2024-07-26 14:08:40.909593] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:24.352 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:24.352 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.610 14:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:24.610 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.868 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:24.868 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:24.868 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
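referrals.sh drives the discovery-referral RPCs through the rpc_cmd wrapper over /var/tmp/spdk.sock; the same add/list/remove cycle exercised here can be reproduced against a running nvmf_tgt with SPDK's scripts/rpc.py (path assumed relative to an SPDK checkout):

  # add three referrals pointing at other discovery services on port 4430
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

  # list them; the test asserts the jq length is 3 and the sorted traddrs match
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # remove them one at a time until the length check reads 0 again
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430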
00:14:24.868 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:24.868 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:24.869 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.127 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:25.127 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:25.127 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:25.127 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:25.127 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:25.127 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.384 14:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:25.384 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.385 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
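The nvme-side half of each assertion parses the discovery log page itself rather than the RPC view, so referrals are verified end to end. The get_referral_ips/get_discovery_entries helpers reduce to a sketch like this (hostnqn/hostid are the job's generated host identity; the jq filters are the ones traced above):

  # fetch the discovery log page from the initiator as JSON
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json > log_page.json

  # referral addresses: every record except the current discovery subsystem
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' log_page.json | sort

  # subnqn carried by a record of a given subtype ("nvme subsystem" here)
  jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn' log_page.json

This is why the referral added with -n nqn.2016-06.io.spdk:cnode1 surfaces as an "nvme subsystem" record, while the one added with -n discovery appears as a "discovery subsystem referral".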
00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:25.643 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
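Teardown is not called inline: it runs through the trap installed at startup ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT SIGTERM EXIT), so the target and namespace are cleaned up even when an assertion aborts the script. A rough sketch of that idiom, with the cleanup body reconstructed from the nvmftestfini/nvmf_tcp_fini steps visible below (the exact helper internals live in nvmf/common.sh, and the netns removal command is an assumed equivalent of remove_spdk_ns):

  cleanup() {
      kill "$nvmfpid" 2>/dev/null || :                   # stop nvmf_tgt if still running
      modprobe -v -r nvme-tcp                            # unload host-side NVMe/TCP modules
      modprobe -v -r nvme-fabrics
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null || :   # assumed remove_spdk_ns equivalent
      ip -4 addr flush cvl_0_1                           # clear the initiator address
  }
  trap cleanup SIGINT SIGTERM EXIT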
00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.901 rmmod nvme_tcp 00:14:25.901 rmmod nvme_fabrics 00:14:25.901 rmmod nvme_keyring 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2475864 ']' 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2475864 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2475864 ']' 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2475864 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2475864 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2475864' 00:14:25.901 killing process with pid 2475864 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2475864 00:14:25.901 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2475864 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.160 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.692 00:14:28.692 real 0m7.583s 00:14:28.692 user 0m10.574s 00:14:28.692 sys 0m2.912s 00:14:28.692 14:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:28.692 ************************************ 00:14:28.692 END TEST nvmf_referrals 00:14:28.692 ************************************ 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.692 ************************************ 00:14:28.692 START TEST nvmf_connect_disconnect 00:14:28.692 ************************************ 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:28.692 * Looking for test storage... 00:14:28.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.692 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.693 14:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:14:28.693 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:31.228 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:31.228 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.228 14:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:31.228 Found net devices under 0000:84:00.0: cvl_0_0 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:31.228 Found net devices under 0000:84:00.1: cvl_0_1 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.228 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:14:31.229 00:14:31.229 --- 10.0.0.2 ping statistics --- 00:14:31.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.229 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:14:31.229 00:14:31.229 --- 10.0.0.1 ping statistics --- 00:14:31.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.229 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2478171 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2478171 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2478171 ']' 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.229 14:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.229 [2024-07-26 14:08:48.021629] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
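nvmfappstart (here, as in the referrals run above) launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers; the PID is captured first so the EXIT trap can kill it later. Roughly, with the flags from this job and $SPDK_ROOT standing in for the workspace path:

  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on cores 0-3
  nvmfpid=$!

  # poll until the app answers on /var/tmp/spdk.sock (what waitforlisten does, simplified)
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The four "Reactor started on core N" notices that follow correspond directly to the 0xF core mask.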
00:14:31.229 [2024-07-26 14:08:48.021720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.229 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.229 [2024-07-26 14:08:48.101166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.487 [2024-07-26 14:08:48.228083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.487 [2024-07-26 14:08:48.228144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.487 [2024-07-26 14:08:48.228161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.487 [2024-07-26 14:08:48.228174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.487 [2024-07-26 14:08:48.228186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.487 [2024-07-26 14:08:48.228255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.487 [2024-07-26 14:08:48.228280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.487 [2024-07-26 14:08:48.228336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.487 [2024-07-26 14:08:48.228339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.487 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.487 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:31.487 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.487 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:31.487 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.746 [2024-07-26 14:08:48.395301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.746 14:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:31.746 [2024-07-26 14:08:48.458015] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:31.746 14:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:35.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.917 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:45.917 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:45.917 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.917 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:45.917 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.918 14:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.918 rmmod nvme_tcp 00:14:45.918 rmmod nvme_fabrics 00:14:45.918 rmmod nvme_keyring 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2478171 ']' 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2478171 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2478171 ']' 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2478171 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2478171 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2478171' 00:14:45.918 killing process with pid 2478171 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2478171 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2478171 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.918 14:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.821 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.821 00:14:47.821 real 0m19.559s 00:14:47.821 user 0m57.019s 00:14:47.821 sys 0m3.809s 00:14:47.821 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.821 14:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:47.821 ************************************ 00:14:47.821 END TEST nvmf_connect_disconnect 00:14:47.821 ************************************ 00:14:47.821 14:09:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:47.821 14:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:47.821 14:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.821 14:09:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.082 ************************************ 00:14:48.082 START TEST nvmf_multitarget 00:14:48.082 ************************************ 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:48.082 * Looking for test storage... 00:14:48.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.082 14:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.082 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.613 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:50.614 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.614 14:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:50.614 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:50.614 Found net devices under 0000:84:00.0: cvl_0_0 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:50.614 Found net devices under 0000:84:00.1: cvl_0_1 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.614 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:50.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:14:50.873 00:14:50.873 --- 10.0.0.2 ping statistics --- 00:14:50.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.873 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:14:50.873 00:14:50.873 --- 10.0.0.1 ping statistics --- 00:14:50.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.873 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2482058 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2482058 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2482058 ']' 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
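
The block above is nvmf_tcp_init repeating the same bring-up the connect/disconnect run used: common.sh matched both ports of the 0x8086:0x159b device against its e810 list, picked cvl_0_0 as the target-side interface and cvl_0_1 as the initiator side, moved the target port into a fresh namespace, and verified the path with one ping in each direction. Condensed, with every command taken from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host

The ACCEPT rule is inserted at position 1 so a pre-existing firewall policy on the host cannot interfere with port 4420 traffic on the initiator-side port.
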
00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.873 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.873 [2024-07-26 14:09:07.743718] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:14:50.873 [2024-07-26 14:09:07.743887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.132 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.132 [2024-07-26 14:09:07.845475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.132 [2024-07-26 14:09:07.970610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.132 [2024-07-26 14:09:07.970671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.132 [2024-07-26 14:09:07.970688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.132 [2024-07-26 14:09:07.970701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.132 [2024-07-26 14:09:07.970713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.132 [2024-07-26 14:09:07.970803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.132 [2024-07-26 14:09:07.973449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.132 [2024-07-26 14:09:07.973487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.132 [2024-07-26 14:09:07.973491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:52.094 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:52.353 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:52.353 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:52.353 "nvmf_tgt_1" 00:14:52.353 14:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:52.611 "nvmf_tgt_2" 00:14:52.611 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:52.611 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:52.869 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:52.869 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:52.869 true 00:14:52.869 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:53.127 true 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.128 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.128 rmmod nvme_tcp 00:14:53.128 rmmod nvme_fabrics 00:14:53.128 rmmod nvme_keyring 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2482058 ']' 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2482058 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2482058 ']' 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2482058 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
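
That is the entire multitarget assertion sequence: the script counts targets through multitarget_rpc.py, creates two extra targets, checks the count again, deletes them, and confirms it is back to one. Condensed, with names and arguments from the log; the bracket tests below stand in for the script's "'[' 1 '!=' 1 ']'" style checks:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints "true"
    $rpc_py nvmf_delete_target -n nvmf_tgt_2              # prints "true"
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]

The nvmftestfini teardown interleaved around this point mirrors the connect/disconnect cleanup: unload nvme-tcp and nvme-fabrics, then kill the nvmf_tgt process by pid.
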
00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2482058 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2482058' 00:14:53.386 killing process with pid 2482058 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2482058 00:14:53.386 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2482058 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.645 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.561 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.561 00:14:55.561 real 0m7.702s 00:14:55.561 user 0m11.582s 00:14:55.561 sys 0m2.749s 00:14:55.561 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.561 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:55.561 ************************************ 00:14:55.561 END TEST nvmf_multitarget 00:14:55.561 ************************************ 00:14:55.561 14:09:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:55.561 14:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:55.561 14:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.561 14:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.821 ************************************ 00:14:55.821 START TEST nvmf_rpc 00:14:55.821 ************************************ 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:55.821 * Looking for test storage... 
00:14:55.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.821 14:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.821 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.351 14:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:58.351 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:58.351 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.351 
14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:58.351 Found net devices under 0000:84:00.0: cvl_0_0 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:58.351 Found net devices under 0000:84:00.1: cvl_0_1 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:58.351 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.352 14:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.352 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.352 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.352 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:58.352 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:58.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:14:58.610 00:14:58.610 --- 10.0.0.2 ping statistics --- 00:14:58.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.610 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:14:58.610 00:14:58.610 --- 10.0.0.1 ping statistics --- 00:14:58.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.610 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2484914 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.610 14:09:15 
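The nvmf_tcp_init steps traced above give the target and the initiator separate network stacks on one machine: one port of the e810 pair (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the traced commands, the plumbing is:

    ip netns add cvl_0_0_ns_spdk                 # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

Both pings return in well under a millisecond, so the two namespaces see each other over the physical link before nvmfappstart launches the target.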
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2484914 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2484914 ']' 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.610 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.610 [2024-07-26 14:09:15.385886] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:14:58.610 [2024-07-26 14:09:15.385995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.610 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.610 [2024-07-26 14:09:15.471901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.894 [2024-07-26 14:09:15.600169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.894 [2024-07-26 14:09:15.600232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.894 [2024-07-26 14:09:15.600249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.894 [2024-07-26 14:09:15.600263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.894 [2024-07-26 14:09:15.600274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
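nvmfappstart launches the target inside the namespace and then blocks until its RPC socket answers. A minimal sketch of that sequence, assuming SPDK's scripts/rpc.py is at hand (rpc_cmd in these traces is the harness wrapper around it); the polling loop is a reconstruction of what waitforlisten does, not the helper itself:

    # start the target in the namespace: 4 reactor cores (-m 0xF), all
    # tracepoint groups enabled (-e 0xFFFF), shared-memory id 0 (-i 0)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the app answers on its RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

The DPDK EAL parameters and the "Reactor started on core N" notices that follow are the target confirming the 0xF core mask took effect.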
00:14:58.894 [2024-07-26 14:09:15.600383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.894 [2024-07-26 14:09:15.600417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.894 [2024-07-26 14:09:15.600487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.894 [2024-07-26 14:09:15.600491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.894 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.894 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:58.894 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.894 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.894 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:59.157 "tick_rate": 2700000000, 00:14:59.157 "poll_groups": [ 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_000", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [] 00:14:59.157 }, 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_001", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [] 00:14:59.157 }, 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_002", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [] 00:14:59.157 }, 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_003", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [] 00:14:59.157 } 00:14:59.157 ] 00:14:59.157 }' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
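The jcount helper traced here is just jq plus wc: count how many poll groups nvmf_get_stats reports (one per reactor core), while its sibling jsum folds a numeric field across them. Reconstructed from the traced invocations:

    # jcount '.poll_groups[].name'  -> 4, matching the 0xF core mask
    rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l
    # jsum '.poll_groups[].admin_qpairs'  -> 0 on a freshly started target
    rpc_cmd nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'

At this point every poll group's "transports" array is still empty, which the next check pins down with jq '.poll_groups[0].transports[0]' expecting null.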
00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.157 [2024-07-26 14:09:15.895614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:59.157 "tick_rate": 2700000000, 00:14:59.157 "poll_groups": [ 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_000", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [ 00:14:59.157 { 00:14:59.157 "trtype": "TCP" 00:14:59.157 } 00:14:59.157 ] 00:14:59.157 }, 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_001", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [ 00:14:59.157 { 00:14:59.157 "trtype": "TCP" 00:14:59.157 } 00:14:59.157 ] 00:14:59.157 }, 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_002", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [ 00:14:59.157 { 00:14:59.157 "trtype": "TCP" 00:14:59.157 } 00:14:59.157 ] 00:14:59.157 }, 00:14:59.157 { 00:14:59.157 "name": "nvmf_tgt_poll_group_003", 00:14:59.157 "admin_qpairs": 0, 00:14:59.157 "io_qpairs": 0, 00:14:59.157 "current_admin_qpairs": 0, 00:14:59.157 "current_io_qpairs": 0, 00:14:59.157 "pending_bdev_io": 0, 00:14:59.157 "completed_nvme_io": 0, 00:14:59.157 "transports": [ 00:14:59.157 { 00:14:59.157 "trtype": "TCP" 00:14:59.157 } 00:14:59.157 ] 00:14:59.157 } 00:14:59.157 ] 00:14:59.157 }' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:59.157 14:09:15 
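Creating the transport is what populates those per-poll-group transport lists. The trace uses the options assembled by nvmf/common.sh (NVMF_TRANSPORT_OPTS='-t tcp -o') plus -u 8192 for the in-capsule data size:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    # every poll group now reports a transport entry:
    rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'   # "TCP"

The second nvmf_get_stats dump above shows exactly that: each of the four poll groups gained a {"trtype": "TCP"} entry while all qpair counters remain zero.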
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.157 14:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.157 Malloc1 00:14:59.157 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.157 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:59.157 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.157 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.157 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.157 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.158 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.158 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.158 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.158 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:59.158 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.158 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.416 [2024-07-26 14:09:16.052549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:14:59.416 [2024-07-26 14:09:16.074907] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:14:59.416 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:59.416 could not add new controller: failed to write to nvme-fabrics device 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.416 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.981 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.981 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.981 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.981 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:59.982 14:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:02.509 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:02.509 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:02.509 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.509 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:02.509 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.509 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:02.509 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.510 [2024-07-26 14:09:18.914152] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:15:02.510 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:02.510 could not add new controller: failed to write to nvme-fabrics device 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.510 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.768 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.768 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.768 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.768 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:02.768 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
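These two expected failures exercise the per-subsystem host ACL: with allow_any_host disabled, a connect from an NQN that is not on the subsystem's host list is rejected at the target ("does not allow host") and nvme-cli surfaces it as an I/O error on /dev/nvme-fabrics. The three states the test walks through, with $HOSTNQN and $HOSTID standing in for the uuid-derived values used in this run:

    # 1. not whitelisted, allow_any_host off -> connect fails
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"        # Input/output error
    # 2. whitelist exactly this host -> connect succeeds
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    # 3. drop the host again, then open the subsystem to everyone
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1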
00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.295 [2024-07-26 14:09:21.681254] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.295 
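waitforserial and waitforserial_disconnect, whose retry loops dominate these traces, simply poll lsblk until a block device carrying the subsystem's serial number appears (or disappears). A sketch of the appearance side, reconstructed from the traced commands (the real helpers live in autotest_common.sh):

    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # one attached namespace -> one matching NAME,SERIAL row
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }

Here a single poll suffices each time: the serial SPDKISFASTANDAWESOME shows up on the first lsblk after the two-second settle.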
14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.295 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.554 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:05.554 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:05.554 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.554 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:05.554 14:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
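The sequence just traced is one pass of a five-iteration loop (the seq 1 5 at rpc.sh@81). Each pass reduces to the following, with $HOSTNQN/$HOSTID again standing in for the uuid-derived values and everything else verbatim from the traced RPCs:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$HOSTNQN" --hostid="$HOSTID"
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done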
00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.083 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.084 [2024-07-26 14:09:24.536437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.084 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.342 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.342 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:08.342 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.342 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:08.342 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 [2024-07-26 14:09:27.316699] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.128 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.128 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:11.128 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.128 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:11.128 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:13.657 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:13.657 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:13.657 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.657 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:13.657 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.657 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:13.657 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.657 14:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.657 [2024-07-26 14:09:30.104764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.657 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.658 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.915 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.915 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:13.915 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.915 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:13.915 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.443 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.444 [2024-07-26 14:09:32.936311] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.444 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:16.702 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.702 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.702 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.702 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:16.702 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:19.261 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:19.261 14:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:19.261 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.261 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:19.261 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.261 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:19.261 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 [2024-07-26 14:09:35.731462] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 [2024-07-26 14:09:35.779511] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 [2024-07-26 14:09:35.827683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.262 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 [2024-07-26 14:09:35.875862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 [2024-07-26 14:09:35.924030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:19.263 "tick_rate": 2700000000, 00:15:19.263 "poll_groups": [ 00:15:19.263 { 00:15:19.263 "name": "nvmf_tgt_poll_group_000", 00:15:19.263 "admin_qpairs": 2, 00:15:19.263 "io_qpairs": 84, 00:15:19.263 "current_admin_qpairs": 0, 00:15:19.263 "current_io_qpairs": 0, 00:15:19.263 "pending_bdev_io": 0, 00:15:19.263 "completed_nvme_io": 183, 00:15:19.263 "transports": [ 00:15:19.263 { 00:15:19.263 "trtype": "TCP" 00:15:19.263 } 00:15:19.263 ] 00:15:19.263 }, 00:15:19.263 { 00:15:19.263 "name": "nvmf_tgt_poll_group_001", 00:15:19.263 "admin_qpairs": 2, 00:15:19.263 "io_qpairs": 84, 00:15:19.263 "current_admin_qpairs": 0, 00:15:19.263 "current_io_qpairs": 0, 00:15:19.263 "pending_bdev_io": 0, 00:15:19.263 "completed_nvme_io": 135, 00:15:19.263 "transports": [ 00:15:19.263 { 00:15:19.263 "trtype": "TCP" 00:15:19.263 } 00:15:19.263 ] 00:15:19.263 }, 00:15:19.263 { 00:15:19.263 "name": "nvmf_tgt_poll_group_002", 00:15:19.263 "admin_qpairs": 1, 00:15:19.263 "io_qpairs": 84, 00:15:19.263 "current_admin_qpairs": 0, 00:15:19.263 "current_io_qpairs": 0, 00:15:19.263 "pending_bdev_io": 0, 00:15:19.263 "completed_nvme_io": 199, 00:15:19.263 "transports": [ 00:15:19.263 { 00:15:19.263 "trtype": "TCP" 00:15:19.263 } 00:15:19.263 ] 00:15:19.263 }, 00:15:19.263 { 00:15:19.263 "name": "nvmf_tgt_poll_group_003", 00:15:19.263 "admin_qpairs": 2, 00:15:19.263 "io_qpairs": 84, 00:15:19.263 "current_admin_qpairs": 0, 00:15:19.263 "current_io_qpairs": 0, 00:15:19.263 "pending_bdev_io": 0, 00:15:19.263 "completed_nvme_io": 169, 00:15:19.263 "transports": [ 00:15:19.263 { 00:15:19.263 "trtype": "TCP" 00:15:19.263 } 00:15:19.263 ] 00:15:19.263 } 00:15:19.263 ] 00:15:19.263 }' 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:19.263 14:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.263 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.264 rmmod nvme_tcp 00:15:19.264 rmmod nvme_fabrics 00:15:19.264 rmmod nvme_keyring 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2484914 ']' 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2484914 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2484914 ']' 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2484914 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2484914 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2484914' 00:15:19.264 killing process with pid 2484914 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2484914 00:15:19.264 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2484914 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
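[Annotation] The jsum helper traced at rpc.sh@19-@20 above reduces one numeric field of the nvmf_get_stats document across all poll groups: jq extracts one number per group, awk sums them. A minimal reconstruction, assuming the JSON captured at rpc.sh@110 is fed in via a herestring from $stats:

    jsum() {
        local filter=$1
        # one number per poll group from jq, summed by awk
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7, matching (( 7 > 0 )) above
    jsum '.poll_groups[].io_qpairs'      # 4 groups x 84 = 336, matching (( 336 > 0 ))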
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.829 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.730 00:15:21.730 real 0m26.056s 00:15:21.730 user 1m23.030s 00:15:21.730 sys 0m4.381s 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 ************************************ 00:15:21.730 END TEST nvmf_rpc 00:15:21.730 ************************************ 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 ************************************ 00:15:21.730 START TEST nvmf_invalid 00:15:21.730 ************************************ 00:15:21.730 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:21.989 * Looking for test storage... 00:15:21.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.989 14:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.989 14:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.989 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:24.520 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:24.520 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.520 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:24.521 Found net devices under 0000:84:00.0: cvl_0_0 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.521 14:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:24.521 Found net devices under 0000:84:00.1: cvl_0_1 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:15:24.521 00:15:24.521 --- 10.0.0.2 ping statistics --- 00:15:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.521 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:15:24.521 00:15:24.521 --- 10.0.0.1 ping statistics --- 00:15:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.521 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2489420 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2489420 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2489420 ']' 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.521 14:09:41 
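[Annotation] The nvmf_tcp_init trace above builds the point-to-point test topology by moving the target-side E810 port into a network namespace and addressing the pair. Condensed, with the device names (cvl_0_0/cvl_0_1) and addresses exactly as they appear in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator

The target app is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the NVMF_APP line above), so 10.0.0.2:4420 is reached from the root namespace over a real link.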
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.521 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:24.780 [2024-07-26 14:09:41.443945] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:15:24.780 [2024-07-26 14:09:41.444118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.780 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.780 [2024-07-26 14:09:41.558614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.037 [2024-07-26 14:09:41.684162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.037 [2024-07-26 14:09:41.684221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.037 [2024-07-26 14:09:41.684238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.037 [2024-07-26 14:09:41.684251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.037 [2024-07-26 14:09:41.684264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:25.037 [2024-07-26 14:09:41.684341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.037 [2024-07-26 14:09:41.684394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.037 [2024-07-26 14:09:41.684457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.037 [2024-07-26 14:09:41.684462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.603 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.603 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:25.603 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.603 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.603 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.860 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.860 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:25.860 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10584 00:15:26.135 [2024-07-26 14:09:42.777012] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:26.135 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:26.135 { 00:15:26.135 "nqn": "nqn.2016-06.io.spdk:cnode10584", 00:15:26.135 "tgt_name": "foobar", 00:15:26.135 "method": "nvmf_create_subsystem", 00:15:26.135 "req_id": 1 00:15:26.135 } 00:15:26.135 Got JSON-RPC error response 00:15:26.135 response: 00:15:26.135 { 00:15:26.135 "code": -32603, 00:15:26.135 "message": "Unable to find target foobar" 00:15:26.135 }' 00:15:26.135 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:26.135 { 00:15:26.135 "nqn": "nqn.2016-06.io.spdk:cnode10584", 00:15:26.135 "tgt_name": "foobar", 00:15:26.135 "method": "nvmf_create_subsystem", 00:15:26.135 "req_id": 1 00:15:26.135 } 00:15:26.135 Got JSON-RPC error response 00:15:26.135 response: 00:15:26.135 { 00:15:26.135 "code": -32603, 00:15:26.135 "message": "Unable to find target foobar" 00:15:26.135 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:26.136 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:26.136 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2417 00:15:26.400 [2024-07-26 14:09:43.210518] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2417: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:26.400 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:26.400 { 00:15:26.400 "nqn": "nqn.2016-06.io.spdk:cnode2417", 00:15:26.400 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:26.400 "method": "nvmf_create_subsystem", 00:15:26.400 "req_id": 1 00:15:26.400 } 00:15:26.400 Got JSON-RPC error 
response 00:15:26.400 response: 00:15:26.400 { 00:15:26.400 "code": -32602, 00:15:26.400 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:26.400 }' 00:15:26.400 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:26.400 { 00:15:26.400 "nqn": "nqn.2016-06.io.spdk:cnode2417", 00:15:26.400 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:26.400 "method": "nvmf_create_subsystem", 00:15:26.400 "req_id": 1 00:15:26.400 } 00:15:26.400 Got JSON-RPC error response 00:15:26.400 response: 00:15:26.400 { 00:15:26.400 "code": -32602, 00:15:26.400 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:26.400 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:26.400 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:26.400 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10964 00:15:26.658 [2024-07-26 14:09:43.543597] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10964: invalid model number 'SPDK_Controller' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:26.918 { 00:15:26.918 "nqn": "nqn.2016-06.io.spdk:cnode10964", 00:15:26.918 "model_number": "SPDK_Controller\u001f", 00:15:26.918 "method": "nvmf_create_subsystem", 00:15:26.918 "req_id": 1 00:15:26.918 } 00:15:26.918 Got JSON-RPC error response 00:15:26.918 response: 00:15:26.918 { 00:15:26.918 "code": -32602, 00:15:26.918 "message": "Invalid MN SPDK_Controller\u001f" 00:15:26.918 }' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:26.918 { 00:15:26.918 "nqn": "nqn.2016-06.io.spdk:cnode10964", 00:15:26.918 "model_number": "SPDK_Controller\u001f", 00:15:26.918 "method": "nvmf_create_subsystem", 00:15:26.918 "req_id": 1 00:15:26.918 } 00:15:26.918 Got JSON-RPC error response 00:15:26.918 response: 00:15:26.918 { 00:15:26.918 "code": -32602, 00:15:26.918 "message": "Invalid MN SPDK_Controller\u001f" 00:15:26.918 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
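[Annotation] Each negative test above follows the same shape: call rpc.py with one deliberately malformed field, capture the JSON-RPC error response, and glob-match the message text. A minimal sketch of that pattern using the cases from the trace; the exact capture details in invalid.sh may differ:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # bad target name -> code -32603 "Unable to find target foobar"
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10584 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # serial number containing a control byte (\x1f) -> code -32602 "Invalid SN"
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2417 2>&1) || true
    [[ $out == *"Invalid SN"* ]]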
target/invalid.sh@25 -- # printf %x 110 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
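For reference, the two nvmf_invalid checks traced above reduce to a pair of hand-runnable commands. The -d (model number) invocation appears verbatim in the trace; the -s (serial number) form is inferred from the matching JSON request, so treat it as a sketch. Both must fail with JSON-RPC error -32602, because the embedded $'\037' (0x1f) control character is not a valid serial/model byte:

    # Distilled negative tests (run from the SPDK repo root):
    scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2417
    scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10964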
00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.918 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 
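The long printf %x / echo -e run surrounding this point is invalid.sh's gen_random_s expanding one character per loop pass. A condensed sketch of the same idea (a hypothetical rewrite, assuming the pool is the ASCII range 32-127 exactly as the chars=() array above lists it):

    # Build an n-character string from random ASCII codes 32..127,
    # mirroring the ll-indexed loop in the trace.
    gen_random_s_sketch() {
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            local code=$(( RANDOM % 96 + 32 ))            # 32..127 inclusive
            string+=$(echo -e "\\x$(printf %x "$code")")  # code -> character
        done
        echo "$string"
    }

The real helper also guards against a leading '-' (the [[ n == \- ]] check below) before emitting the result; here that result is apparently fed back to nvmf_create_subsystem as a 21-byte serial number, one byte over the 20-byte NVMe SN field.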
00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:15:26.919 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n`cxGC<WlW!58A2Pzp0f(' 00:15:31.817 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.817 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.370 00:15:34.370 real 0m12.113s 00:15:34.370 user 0m33.550s 00:15:34.370 sys 0m3.243s 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:34.370 ************************************ 00:15:34.370 END TEST nvmf_invalid 00:15:34.370 ************************************ 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.370 ************************************ 00:15:34.370 START TEST nvmf_connect_stress 00:15:34.370 ************************************ 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:34.370 * Looking for test storage... 00:15:34.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.370 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.371 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.906 14:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:36.906 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:36.906 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
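The device-ID tables assembled above drive the per-device scan echoed just below. As a rough standalone sketch (not the real nvmf/common.sh; the PCI addresses are the two E810 functions reported in this run), the scan walks sysfs to find the kernel net device bound to each candidate function:

    # List kernel netdevs under each candidate PCI function via sysfs.
    net_devs=()
    for pci in 0000:84:00.0 0000:84:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue     # no bound netdev -> skip
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done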
00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:36.906 Found net devices under 0000:84:00.0: cvl_0_0 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:36.906 Found net devices under 0000:84:00.1: cvl_0_1 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.906 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:15:36.907 00:15:36.907 --- 10.0.0.2 ping statistics --- 00:15:36.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.907 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:15:36.907 00:15:36.907 --- 10.0.0.1 ping statistics --- 00:15:36.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.907 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2492460 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2492460 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2492460 ']' 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.907 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.907 [2024-07-26 14:09:53.666521] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
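Pulled out of the trace above, the target/initiator plumbing that the two one-packet pings just verified amounts to the following core commands (each is as logged; only the consolidation is editorial):

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator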
00:15:36.907 [2024-07-26 14:09:53.666615] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.907 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.907 [2024-07-26 14:09:53.753947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:37.166 [2024-07-26 14:09:53.894249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.166 [2024-07-26 14:09:53.894320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.166 [2024-07-26 14:09:53.894340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.166 [2024-07-26 14:09:53.894356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.166 [2024-07-26 14:09:53.894370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.166 [2024-07-26 14:09:53.894466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.166 [2024-07-26 14:09:53.894529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.166 [2024-07-26 14:09:53.894534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.166 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.166 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:37.166 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.166 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:37.166 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.424 [2024-07-26 14:09:54.065531] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.424 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 [2024-07-26 14:09:54.098160] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 NULL1 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2492486 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.425 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.425 14:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.684 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.684 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:37.684 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.684 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.684 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.942 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.942 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:37.942 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.942 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.942 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.507 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.507 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:38.507 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.507 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.507 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.765 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.765 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:38.765 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.765 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.765 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.023 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.023 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:39.023 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.023 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.023 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.281 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.281 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:39.281 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.281 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.281 14:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.539 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.539 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:39.539 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.539 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.539 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.105 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:40.105 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.105 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.105 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.363 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.363 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:40.363 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.363 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.363 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.621 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.621 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:40.621 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.621 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.621 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.879 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.879 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:40.879 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.879 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.879 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.137 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.137 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:41.137 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.137 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.137 14:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.704 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:41.704 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.704 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.704 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.962 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.962 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:41.962 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.962 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.962 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.220 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.220 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:42.220 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.220 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.220 14:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.478 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.478 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:42.478 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.478 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.478 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.043 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.043 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:43.043 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.043 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.044 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.301 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.301 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:43.301 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.301 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.301 14:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.559 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.559 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:43.559 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.559 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.559 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.817 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.817 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:43.817 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.817 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.817 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.075 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.075 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:44.075 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.075 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.075 14:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.640 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.640 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:44.640 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.640 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.640 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.898 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.898 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:44.898 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.898 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.898 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.156 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.156 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:45.156 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.156 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.156 14:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.413 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.413 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:45.413 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.413 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.413 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.671 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.671 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:45.671 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.671 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.671 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.237 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.237 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:46.237 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.237 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.237 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.495 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.495 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:46.495 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.495 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.495 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.752 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.752 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:46.753 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.753 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.753 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.010 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.010 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:47.010 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.010 14:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.010 14:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.268 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.268 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:47.268 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.268 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.268 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.526 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2492486 00:15:47.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2492486) - No such process 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2492486 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.786 rmmod nvme_tcp 00:15:47.786 rmmod nvme_fabrics 00:15:47.786 rmmod nvme_keyring 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2492460 ']' 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2492460 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2492460 ']' 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2492460 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.786 14:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492460 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492460' 00:15:47.786 killing process with pid 2492460 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2492460 00:15:47.786 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2492460 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.066 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.614 00:15:50.614 real 0m16.163s 00:15:50.614 user 0m39.015s 00:15:50.614 sys 0m6.479s 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.614 ************************************ 00:15:50.614 END TEST nvmf_connect_stress 00:15:50.614 ************************************ 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.614 ************************************ 00:15:50.614 START TEST nvmf_fused_ordering 00:15:50.614 ************************************ 00:15:50.614 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:50.614 * Looking for test storage... 
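With nvmf_connect_stress wrapped up above (target pid 2492460 killed, addresses flushed, 16.2 s wall time) and the fused_ordering storage probe resolving just below, it is worth unpacking the loop that dominated the connect_stress tail: connect_stress.sh@34 probes the stress tool's pid with kill -0 and @35 fires an RPC at the target on every pass, until the pid vanishes ("No such process"), @38's wait reaps it, and @39 removes the RPC scratch file. A minimal sketch of that shape, where $stress_pid, $testdir and the nvmf_get_subsystems stand-in for rpc_cmd's payload are illustrative rather than the script's exact internals:

while kill -0 "$stress_pid" 2>/dev/null; do        # kill -0 only probes existence; no signal is delivered
    scripts/rpc.py nvmf_get_subsystems >/dev/null  # illustrative RPC: keep the control plane busy under load
done
wait "$stress_pid" 2>/dev/null                     # reap the stress tool; harmless once the pid is gone
rm -f "$testdir/rpc.txt"                           # connect_stress.sh@39: drop the RPC scratch file ($testdir assumed)

The value of the pattern is presumably that a wedged target makes the RPC call hang and the test time out, instead of the loop spinning to a vacuous pass.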
00:15:50.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.614 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.615 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.615 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.615 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.615 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.615 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:53.149 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:53.149 14:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:53.150 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:53.150 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
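The device walk above is table-driven: nvmf/common.sh@296-@318 seeds e810 (Intel 0x1592/0x159b), x722 (0x37d2) and mlx (the Mellanox IDs) from a PCI bus cache, then @340-@400 resolves each matching function to its kernel net device through sysfs. A rough standalone equivalent for the E810 parts found here (8086:159b is taken from the trace; the lspci flags are stock pciutils, and this is a sketch, not the script's actual lookup):

# Map every Intel E810 (8086:159b) PCI function to its kernel netdev via sysfs.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue               # no bound netdev for this function
        echo "Found net devices under $pci: $(basename "$netdir")"
    done
done

which is exactly the shape of the "Found net devices under 0000:84:00.0: cvl_0_0" lines in the trace.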
00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:53.150 Found net devices under 0000:84:00.0: cvl_0_0 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:53.150 Found net devices under 0000:84:00.1: cvl_0_1 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.150 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:53.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:15:53.151 00:15:53.151 --- 10.0.0.2 ping statistics --- 00:15:53.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.151 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:15:53.151 00:15:53.151 --- 10.0.0.1 ping statistics --- 00:15:53.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.151 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2495773 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2495773 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2495773 ']' 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.151 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.151 [2024-07-26 14:10:09.990156] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
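Stripped of the xtrace prefixes, the nvmf_tcp_init sequence that produced the two clean pings above (nvmf/common.sh@244-@268 in the trace) is:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1            # start from clean addresses
ip netns add cvl_0_0_ns_spdk                                    # the target port gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port inbound
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

Splitting the two E810 ports across a namespace forces initiator and target traffic onto the physical link instead of loopback, and the sub-millisecond round trips gate the rest of the run: everything target-side from here on executes under ip netns exec cvl_0_0_ns_spdk, which is exactly how nvmf_tgt is launched below.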
00:15:53.151 [2024-07-26 14:10:09.990332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.409 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.409 [2024-07-26 14:10:10.103479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.409 [2024-07-26 14:10:10.241959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.409 [2024-07-26 14:10:10.242029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.409 [2024-07-26 14:10:10.242049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.409 [2024-07-26 14:10:10.242066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.409 [2024-07-26 14:10:10.242081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.409 [2024-07-26 14:10:10.242119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.666 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.666 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:53.666 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.666 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:53.666 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.666 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.666 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 [2024-07-26 14:10:10.414507] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.691 [2024-07-26 14:10:10.430772] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 NULL1 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.692 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:53.692 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.692 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:53.692 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.692 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:53.692 [2024-07-26 14:10:10.476534] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
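fused_ordering.sh@15-@20 has now provisioned the target entirely over RPC: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1 (the "size: 1GB" the tool reports below). Assuming rpc_cmd forwards its arguments to scripts/rpc.py on the target's RPC socket, which is how the SPDK harness conventionally wires it, the equivalent direct calls would be roughly:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # -o: TCP C2H-success toggle, -u: in-capsule data size
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Here -a allows any host NQN to connect and -s sets the subsystem serial number; backing the namespace with a null bdev makes the data path essentially free, so the test exercises command ordering rather than storage.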
00:15:53.692 [2024-07-26 14:10:10.476577] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495794 ] 00:15:53.692 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.256 Attached to nqn.2016-06.io.spdk:cnode1 00:15:54.256 Namespace ID: 1 size: 1GB 00:15:54.256 fused_ordering(0) 00:15:54.256 fused_ordering(1) 00:15:54.256 fused_ordering(2) 00:15:54.256 fused_ordering(3) 00:15:54.256 fused_ordering(4) 00:15:54.256 fused_ordering(5) 00:15:54.256 fused_ordering(6) 00:15:54.256 fused_ordering(7) 00:15:54.256 fused_ordering(8) 00:15:54.256 fused_ordering(9) 00:15:54.256 fused_ordering(10) 00:15:54.256 fused_ordering(11) 00:15:54.256 fused_ordering(12) 00:15:54.256 fused_ordering(13) 00:15:54.256 fused_ordering(14) 00:15:54.256 fused_ordering(15) 00:15:54.256 fused_ordering(16) 00:15:54.256 fused_ordering(17) 00:15:54.256 fused_ordering(18) 00:15:54.256 fused_ordering(19) 00:15:54.256 fused_ordering(20) 00:15:54.256 fused_ordering(21) 00:15:54.256 fused_ordering(22) 00:15:54.256 fused_ordering(23) 00:15:54.256 fused_ordering(24) 00:15:54.256 fused_ordering(25) 00:15:54.256 fused_ordering(26) 00:15:54.256 fused_ordering(27) 00:15:54.256 fused_ordering(28) 00:15:54.256 fused_ordering(29) 00:15:54.256 fused_ordering(30) 00:15:54.256 fused_ordering(31) 00:15:54.256 fused_ordering(32) 00:15:54.256 fused_ordering(33) 00:15:54.256 fused_ordering(34) 00:15:54.256 fused_ordering(35) 00:15:54.256 fused_ordering(36) 00:15:54.256 fused_ordering(37) 00:15:54.256 fused_ordering(38) 00:15:54.256 fused_ordering(39) 00:15:54.256 fused_ordering(40) 00:15:54.256 fused_ordering(41) 00:15:54.256 fused_ordering(42) 00:15:54.256 fused_ordering(43) 00:15:54.256 fused_ordering(44) 00:15:54.256 fused_ordering(45) 00:15:54.256 fused_ordering(46) 00:15:54.256 fused_ordering(47) 00:15:54.256 fused_ordering(48) 00:15:54.256 fused_ordering(49) 00:15:54.256 fused_ordering(50) 00:15:54.256 fused_ordering(51) 00:15:54.256 fused_ordering(52) 00:15:54.256 fused_ordering(53) 00:15:54.256 fused_ordering(54) 00:15:54.256 fused_ordering(55) 00:15:54.256 fused_ordering(56) 00:15:54.256 fused_ordering(57) 00:15:54.256 fused_ordering(58) 00:15:54.256 fused_ordering(59) 00:15:54.256 fused_ordering(60) 00:15:54.256 fused_ordering(61) 00:15:54.256 fused_ordering(62) 00:15:54.256 fused_ordering(63) 00:15:54.256 fused_ordering(64) 00:15:54.257 fused_ordering(65) 00:15:54.257 fused_ordering(66) 00:15:54.257 fused_ordering(67) 00:15:54.257 fused_ordering(68) 00:15:54.257 fused_ordering(69) 00:15:54.257 fused_ordering(70) 00:15:54.257 fused_ordering(71) 00:15:54.257 fused_ordering(72) 00:15:54.257 fused_ordering(73) 00:15:54.257 fused_ordering(74) 00:15:54.257 fused_ordering(75) 00:15:54.257 fused_ordering(76) 00:15:54.257 fused_ordering(77) 00:15:54.257 fused_ordering(78) 00:15:54.257 fused_ordering(79) 00:15:54.257 fused_ordering(80) 00:15:54.257 fused_ordering(81) 00:15:54.257 fused_ordering(82) 00:15:54.257 fused_ordering(83) 00:15:54.257 fused_ordering(84) 00:15:54.257 fused_ordering(85) 00:15:54.257 fused_ordering(86) 00:15:54.257 fused_ordering(87) 00:15:54.257 fused_ordering(88) 00:15:54.257 fused_ordering(89) 00:15:54.257 fused_ordering(90) 00:15:54.257 fused_ordering(91) 00:15:54.257 fused_ordering(92) 00:15:54.257 fused_ordering(93) 00:15:54.257 fused_ordering(94) 00:15:54.257 fused_ordering(95) 00:15:54.257 fused_ordering(96) 
00:15:54.257 fused_ordering(97) 00:15:54.257 fused_ordering(98) 00:15:54.257 fused_ordering(99) 00:15:54.257 fused_ordering(100) 00:15:54.257 fused_ordering(101) 00:15:54.257 fused_ordering(102) 00:15:54.257 fused_ordering(103) 00:15:54.257 fused_ordering(104) 00:15:54.257 fused_ordering(105) 00:15:54.257 fused_ordering(106) 00:15:54.257 fused_ordering(107) 00:15:54.257 fused_ordering(108) 00:15:54.257 fused_ordering(109) 00:15:54.257 fused_ordering(110) 00:15:54.257 fused_ordering(111) 00:15:54.257 fused_ordering(112) 00:15:54.257 fused_ordering(113) 00:15:54.257 fused_ordering(114) 00:15:54.257 fused_ordering(115) 00:15:54.257 fused_ordering(116) 00:15:54.257 fused_ordering(117) 00:15:54.257 fused_ordering(118) 00:15:54.257 fused_ordering(119) 00:15:54.257 fused_ordering(120) 00:15:54.257 fused_ordering(121) 00:15:54.257 fused_ordering(122) 00:15:54.257 fused_ordering(123) 00:15:54.257 fused_ordering(124) 00:15:54.257 fused_ordering(125) 00:15:54.257 fused_ordering(126) 00:15:54.257 fused_ordering(127) 00:15:54.257 fused_ordering(128) 00:15:54.257 fused_ordering(129) 00:15:54.257 fused_ordering(130) 00:15:54.257 fused_ordering(131) 00:15:54.257 fused_ordering(132) 00:15:54.257 fused_ordering(133) 00:15:54.257 fused_ordering(134) 00:15:54.257 fused_ordering(135) 00:15:54.257 fused_ordering(136) 00:15:54.257 fused_ordering(137) 00:15:54.257 fused_ordering(138) 00:15:54.257 fused_ordering(139) 00:15:54.257 fused_ordering(140) 00:15:54.257 fused_ordering(141) 00:15:54.257 fused_ordering(142) 00:15:54.257 fused_ordering(143) 00:15:54.257 fused_ordering(144) 00:15:54.257 fused_ordering(145) 00:15:54.257 fused_ordering(146) 00:15:54.257 fused_ordering(147) 00:15:54.257 fused_ordering(148) 00:15:54.257 fused_ordering(149) 00:15:54.257 fused_ordering(150) 00:15:54.257 fused_ordering(151) 00:15:54.257 fused_ordering(152) 00:15:54.257 fused_ordering(153) 00:15:54.257 fused_ordering(154) 00:15:54.257 fused_ordering(155) 00:15:54.257 fused_ordering(156) 00:15:54.257 fused_ordering(157) 00:15:54.257 fused_ordering(158) 00:15:54.257 fused_ordering(159) 00:15:54.257 fused_ordering(160) 00:15:54.257 fused_ordering(161) 00:15:54.257 fused_ordering(162) 00:15:54.257 fused_ordering(163) 00:15:54.257 fused_ordering(164) 00:15:54.257 fused_ordering(165) 00:15:54.257 fused_ordering(166) 00:15:54.257 fused_ordering(167) 00:15:54.257 fused_ordering(168) 00:15:54.257 fused_ordering(169) 00:15:54.257 fused_ordering(170) 00:15:54.257 fused_ordering(171) 00:15:54.257 fused_ordering(172) 00:15:54.257 fused_ordering(173) 00:15:54.257 fused_ordering(174) 00:15:54.257 fused_ordering(175) 00:15:54.257 fused_ordering(176) 00:15:54.257 fused_ordering(177) 00:15:54.257 fused_ordering(178) 00:15:54.257 fused_ordering(179) 00:15:54.257 fused_ordering(180) 00:15:54.257 fused_ordering(181) 00:15:54.257 fused_ordering(182) 00:15:54.257 fused_ordering(183) 00:15:54.257 fused_ordering(184) 00:15:54.257 fused_ordering(185) 00:15:54.257 fused_ordering(186) 00:15:54.257 fused_ordering(187) 00:15:54.257 fused_ordering(188) 00:15:54.257 fused_ordering(189) 00:15:54.257 fused_ordering(190) 00:15:54.257 fused_ordering(191) 00:15:54.257 fused_ordering(192) 00:15:54.257 fused_ordering(193) 00:15:54.257 fused_ordering(194) 00:15:54.257 fused_ordering(195) 00:15:54.257 fused_ordering(196) 00:15:54.257 fused_ordering(197) 00:15:54.257 fused_ordering(198) 00:15:54.257 fused_ordering(199) 00:15:54.257 fused_ordering(200) 00:15:54.257 fused_ordering(201) 00:15:54.257 fused_ordering(202) 00:15:54.257 fused_ordering(203) 00:15:54.257 
fused_ordering(204) 00:15:54.257 fused_ordering(205) 00:15:54.515 fused_ordering(206) 00:15:54.515 fused_ordering(207) 00:15:54.515 fused_ordering(208) 00:15:54.515 fused_ordering(209) 00:15:54.515 fused_ordering(210) 00:15:54.515 fused_ordering(211) 00:15:54.515 fused_ordering(212) 00:15:54.515 fused_ordering(213) 00:15:54.515 fused_ordering(214) 00:15:54.515 fused_ordering(215) 00:15:54.515 fused_ordering(216) 00:15:54.515 fused_ordering(217) 00:15:54.515 fused_ordering(218) 00:15:54.515 fused_ordering(219) 00:15:54.515 fused_ordering(220) 00:15:54.515 fused_ordering(221) 00:15:54.515 fused_ordering(222) 00:15:54.515 fused_ordering(223) 00:15:54.515 fused_ordering(224) 00:15:54.515 fused_ordering(225) 00:15:54.515 fused_ordering(226) 00:15:54.515 fused_ordering(227) 00:15:54.515 fused_ordering(228) 00:15:54.515 fused_ordering(229) 00:15:54.515 fused_ordering(230) 00:15:54.515 fused_ordering(231) 00:15:54.515 fused_ordering(232) 00:15:54.515 fused_ordering(233) 00:15:54.515 fused_ordering(234) 00:15:54.515 fused_ordering(235) 00:15:54.515 fused_ordering(236) 00:15:54.515 fused_ordering(237) 00:15:54.515 fused_ordering(238) 00:15:54.515 fused_ordering(239) 00:15:54.515 fused_ordering(240) 00:15:54.515 fused_ordering(241) 00:15:54.515 fused_ordering(242) 00:15:54.515 fused_ordering(243) 00:15:54.515 fused_ordering(244) 00:15:54.515 fused_ordering(245) 00:15:54.515 fused_ordering(246) 00:15:54.515 fused_ordering(247) 00:15:54.515 fused_ordering(248) 00:15:54.515 fused_ordering(249) 00:15:54.515 fused_ordering(250) 00:15:54.515 fused_ordering(251) 00:15:54.515 fused_ordering(252) 00:15:54.515 fused_ordering(253) 00:15:54.515 fused_ordering(254) 00:15:54.515 fused_ordering(255) 00:15:54.515 fused_ordering(256) 00:15:54.515 fused_ordering(257) 00:15:54.515 fused_ordering(258) 00:15:54.515 fused_ordering(259) 00:15:54.515 fused_ordering(260) 00:15:54.515 fused_ordering(261) 00:15:54.515 fused_ordering(262) 00:15:54.515 fused_ordering(263) 00:15:54.515 fused_ordering(264) 00:15:54.515 fused_ordering(265) 00:15:54.515 fused_ordering(266) 00:15:54.515 fused_ordering(267) 00:15:54.515 fused_ordering(268) 00:15:54.515 fused_ordering(269) 00:15:54.515 fused_ordering(270) 00:15:54.515 fused_ordering(271) 00:15:54.515 fused_ordering(272) 00:15:54.515 fused_ordering(273) 00:15:54.515 fused_ordering(274) 00:15:54.515 fused_ordering(275) 00:15:54.515 fused_ordering(276) 00:15:54.515 fused_ordering(277) 00:15:54.515 fused_ordering(278) 00:15:54.515 fused_ordering(279) 00:15:54.515 fused_ordering(280) 00:15:54.515 fused_ordering(281) 00:15:54.515 fused_ordering(282) 00:15:54.515 fused_ordering(283) 00:15:54.515 fused_ordering(284) 00:15:54.515 fused_ordering(285) 00:15:54.515 fused_ordering(286) 00:15:54.515 fused_ordering(287) 00:15:54.515 fused_ordering(288) 00:15:54.515 fused_ordering(289) 00:15:54.515 fused_ordering(290) 00:15:54.515 fused_ordering(291) 00:15:54.515 fused_ordering(292) 00:15:54.515 fused_ordering(293) 00:15:54.515 fused_ordering(294) 00:15:54.515 fused_ordering(295) 00:15:54.515 fused_ordering(296) 00:15:54.515 fused_ordering(297) 00:15:54.515 fused_ordering(298) 00:15:54.515 fused_ordering(299) 00:15:54.515 fused_ordering(300) 00:15:54.515 fused_ordering(301) 00:15:54.515 fused_ordering(302) 00:15:54.515 fused_ordering(303) 00:15:54.515 fused_ordering(304) 00:15:54.515 fused_ordering(305) 00:15:54.515 fused_ordering(306) 00:15:54.515 fused_ordering(307) 00:15:54.515 fused_ordering(308) 00:15:54.515 fused_ordering(309) 00:15:54.515 fused_ordering(310) 00:15:54.515 fused_ordering(311) 
00:15:54.515 fused_ordering(312) ... 00:15:54.516 fused_ordering(410) 00:15:55.080 fused_ordering(411) ... 00:15:55.081 fused_ordering(615) 00:15:56.012 fused_ordering(616) ... 00:15:56.013 fused_ordering(820) 00:15:56.945 fused_ordering(821) ... 00:15:56.946 fused_ordering(1023) 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.946 rmmod nvme_tcp 00:15:56.946 rmmod nvme_fabrics 00:15:56.946 rmmod nvme_keyring 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125
-- # return 0 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2495773 ']' 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2495773 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2495773 ']' 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2495773 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495773 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495773' 00:15:56.946 killing process with pid 2495773 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2495773 00:15:56.946 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2495773 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.204 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.107 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.107 00:15:59.107 real 0m8.974s 00:15:59.107 user 0m5.852s 00:15:59.107 sys 0m4.609s 00:15:59.107 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.107 14:10:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:59.107 ************************************ 00:15:59.107 END TEST nvmf_fused_ordering 00:15:59.107 ************************************ 00:15:59.107 14:10:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:59.107 14:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:59.107 14:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.107 14:10:15 
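The killprocess teardown logged above is a reusable shell pattern: probe the pid with kill -0, refuse to signal a privileged wrapper by checking the process name, then send the signal and reap the child. A minimal sketch of that pattern, with illustrative names (the real helper lives in autotest_common.sh and carries more retries and error handling):

# kill-and-reap sketch; kill_and_reap is a hypothetical name, not the autotest helper
kill_and_reap() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # process already gone, nothing to do
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it if it was our child
}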
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.366 ************************************ 00:15:59.366 START TEST nvmf_ns_masking 00:15:59.366 ************************************ 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:59.366 * Looking for test storage... 00:15:59.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.366 14:10:16 
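Worth noting from the identity setup above: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID used throughout the test is exactly that trailing UUID. A sketch of the derivation (assumes nvme-cli is installed; the parameter expansion is an assumed equivalent of what common.sh does, not a quote from it):

# Build the initiator identity the way the log above reports it.
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep the UUID after the last ':' (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf '%s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"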
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=da9fe27f-4f07-44b0-8701-e8468670c4fd 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=79100f9d-2816-4dbd-96ca-0bf4b3a0c3af 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6f443ddb-155d-481f-b44d-3053fbf855d1 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.366 14:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:01.899 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:01.899 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:01.899 Found net devices under 0000:84:00.0: cvl_0_0 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:01.899 Found net devices under 0000:84:00.1: cvl_0_1 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.899 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.900 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.160 14:10:18 
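nvmf_tcp_init, whose remaining steps are logged just below, splits the two detected e810 ports across network namespaces so the NVMe/TCP traffic really crosses the link: cvl_0_0 (10.0.0.2) becomes the target side inside namespace cvl_0_0_ns_spdk, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. The whole topology, condensed from this run's commands:

# Target port in its own namespace; initiator port in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check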
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:16:02.160 00:16:02.160 --- 10.0.0.2 ping statistics --- 00:16:02.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.160 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:16:02.160 00:16:02.160 --- 10.0.0.1 ping statistics --- 00:16:02.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.160 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2498262 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2498262 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2498262 ']' 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.160 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:02.160 [2024-07-26 14:10:18.997676] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:16:02.160 [2024-07-26 14:10:18.997826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.420 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.420 [2024-07-26 14:10:19.103566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.420 [2024-07-26 14:10:19.227931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.420 [2024-07-26 14:10:19.227999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.420 [2024-07-26 14:10:19.228016] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.420 [2024-07-26 14:10:19.228030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.420 [2024-07-26 14:10:19.228042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.420 [2024-07-26 14:10:19.228075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.678 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.678 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:02.678 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.678 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:02.678 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:02.678 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.678 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:02.937 [2024-07-26 14:10:19.694136] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.937 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:02.937 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:02.937 14:10:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:03.503 Malloc1 00:16:03.503 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:03.761 Malloc2 00:16:03.761 14:10:20 
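From here on the trace is the masking exercise itself, and it reduces to a short rpc.py/nvme-cli sequence: export a namespace, connect with an explicit host NQN and host ID, and judge visibility by whether the namespace's NGUID reads back as all zeros, which is how the ns_is_visible checks below decide. A condensed sketch with this run's values (rpc.py is SPDK's scripts/rpc.py; full paths dropped for brevity):

# Export Malloc1 as namespace 1 of cnode1 and listen on NVMe/TCP 10.0.0.2:4420.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Connect with explicit host NQN + host ID so per-host masking can match this initiator.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 6f443ddb-155d-481f-b44d-3053fbf855d1 -a 10.0.0.2 -s 4420 -i 4

# Visibility check: a masked (inactive) namespace identifies with an all-zero NGUID.
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
[ "$nguid" != 00000000000000000000000000000000 ] && echo "ns 1 visible"

# With --no-auto-visible the namespace stays hidden until the host is explicitly allowed:
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1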
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:04.019 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:04.278 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.883 [2024-07-26 14:10:21.443248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.883 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:04.883 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f443ddb-155d-481f-b44d-3053fbf855d1 -a 10.0.0.2 -s 4420 -i 4 00:16:04.883 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:04.883 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:04.883 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.883 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:04.883 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:06.784 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:07.042 [ 0]:0x1 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7fd7f5fdfe1419fa06811ac56bf5038 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7fd7f5fdfe1419fa06811ac56bf5038 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.042 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:07.300 [ 0]:0x1 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7fd7f5fdfe1419fa06811ac56bf5038 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7fd7f5fdfe1419fa06811ac56bf5038 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.300 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:07.558 [ 1]:0x2 00:16:07.558 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:07.558 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.558 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b0f09f4f0fd4705ba23797e3deeef10 00:16:07.558 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b0f09f4f0fd4705ba23797e3deeef10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.559 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:07.559 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.559 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.816 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:08.074 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:08.074 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f443ddb-155d-481f-b44d-3053fbf855d1 -a 10.0.0.2 -s 4420 -i 4 00:16:08.332 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:08.332 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.332 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.332 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:08.332 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:08.332 14:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:10.232 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:10.232 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:10.232 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.232 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:10.232 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.232 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:10.490 [ 0]:0x2 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b0f09f4f0fd4705ba23797e3deeef10 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b0f09f4f0fd4705ba23797e3deeef10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.490 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:10.748 [ 0]:0x1 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7fd7f5fdfe1419fa06811ac56bf5038 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7fd7f5fdfe1419fa06811ac56bf5038 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:10.748 [ 1]:0x2 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:16:10.748 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:11.006 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b0f09f4f0fd4705ba23797e3deeef10 00:16:11.006 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b0f09f4f0fd4705ba23797e3deeef10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:11.006 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:11.264 [ 0]:0x2 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b0f09f4f0fd4705ba23797e3deeef10 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b0f09f4f0fd4705ba23797e3deeef10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:11.264 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.521 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6f443ddb-155d-481f-b44d-3053fbf855d1 -a 10.0.0.2 -s 4420 -i 4 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:11.780 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:14.309 [ 0]:0x1 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d7fd7f5fdfe1419fa06811ac56bf5038 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d7fd7f5fdfe1419fa06811ac56bf5038 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:14.309 [ 1]:0x2 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:14.309 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:14.309 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b0f09f4f0fd4705ba23797e3deeef10 00:16:14.309 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b0f09f4f0fd4705ba23797e3deeef10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:14.309 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:14.567 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:14.826 14:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:14.826 [ 0]:0x2 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b0f09f4f0fd4705ba23797e3deeef10 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b0f09f4f0fd4705ba23797e3deeef10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:14.826 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:15.085 [2024-07-26 14:10:31.842802] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:15.085 request: 00:16:15.085 { 00:16:15.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.085 "nsid": 2, 00:16:15.085 "host": "nqn.2016-06.io.spdk:host1", 00:16:15.085 "method": "nvmf_ns_remove_host", 00:16:15.085 "req_id": 1 00:16:15.085 } 00:16:15.085 Got JSON-RPC error response 00:16:15.085 response: 00:16:15.085 { 00:16:15.085 "code": -32602, 00:16:15.085 "message": "Invalid parameters" 00:16:15.085 } 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.085 [ 0]:0x2 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.085 14:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b0f09f4f0fd4705ba23797e3deeef10 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b0f09f4f0fd4705ba23797e3deeef10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2499908 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2499908 /var/tmp/host.sock 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2499908 ']' 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:15.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.344 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:15.344 [2024-07-26 14:10:32.186500] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
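[Annotation] The xtrace above exercises the namespace-masking RPCs (nvmf_subsystem_add_ns --no-auto-visible, nvmf_ns_add_host, nvmf_ns_remove_host) and verifies the result from the host side. A minimal sketch of the ns_is_visible helper the trace drives, reconstructed from the commands logged at target/ns_masking.sh@43-45 (assumes a connected controller at /dev/nvme0; a namespace masked from this host NQN identifies with an all-zero NGUID):

  # Reconstructed sketch from the trace, not the canonical script:
  ns_is_visible() {
      local nsid=$1
      # Masked namespaces are absent from the active-namespace list.
      nvme list-ns /dev/nvme0 | grep "$nsid"
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      # Visible namespace: non-zero NGUID; masked: all zeros (fails the test).
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

The NOT wrapper in the trace inverts this check, so a masked namespace (zero NGUID) makes the surrounding test pass.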
00:16:15.344 [2024-07-26 14:10:32.186588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499908 ] 00:16:15.344 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.602 [2024-07-26 14:10:32.255206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.602 [2024-07-26 14:10:32.375579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.860 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.860 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:15.860 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.118 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:16.682 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid da9fe27f-4f07-44b0-8701-e8468670c4fd 00:16:16.682 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:16.682 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DA9FE27F4F0744B08701E8468670C4FD -i 00:16:17.248 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 79100f9d-2816-4dbd-96ca-0bf4b3a0c3af 00:16:17.248 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:17.248 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 79100F9D28164DBD96CA0BF4B3A0C3AF -i 00:16:17.506 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:17.764 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:18.330 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:18.330 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:18.896 nvme0n1 00:16:18.896 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:18.896 14:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:19.462 nvme1n2 00:16:19.462 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:19.462 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:19.462 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:19.462 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:19.462 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:19.768 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:19.768 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:19.768 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:19.768 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:20.057 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ da9fe27f-4f07-44b0-8701-e8468670c4fd == \d\a\9\f\e\2\7\f\-\4\f\0\7\-\4\4\b\0\-\8\7\0\1\-\e\8\4\6\8\6\7\0\c\4\f\d ]] 00:16:20.057 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:20.057 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:20.057 14:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 79100f9d-2816-4dbd-96ca-0bf4b3a0c3af == \7\9\1\0\0\f\9\d\-\2\8\1\6\-\4\d\b\d\-\9\6\c\a\-\0\b\f\4\b\3\a\0\c\3\a\f ]] 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2499908 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2499908 ']' 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2499908 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499908 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2499908' 00:16:20.316 killing process with pid 2499908 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2499908 00:16:20.316 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2499908 00:16:20.882 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.140 14:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.140 rmmod nvme_tcp 00:16:21.140 rmmod nvme_fabrics 00:16:21.140 rmmod nvme_keyring 00:16:21.140 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2498262 ']' 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2498262 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2498262 ']' 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2498262 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498262 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498262' 00:16:21.398 killing process with pid 2498262 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2498262 00:16:21.398 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2498262 00:16:21.657 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.657 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.657 
14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.657 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.657 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.657 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.657 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.657 14:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:24.190 00:16:24.190 real 0m24.474s 00:16:24.190 user 0m33.941s 00:16:24.190 sys 0m5.142s 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:24.190 ************************************ 00:16:24.190 END TEST nvmf_ns_masking 00:16:24.190 ************************************ 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.190 ************************************ 00:16:24.190 START TEST nvmf_nvme_cli 00:16:24.190 ************************************ 00:16:24.190 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:24.190 * Looking for test storage... 
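[Annotation] The final leg of the masking test above (hostpid 2499908) provisioned namespaces with explicit NGUIDs derived from UUIDs. The uuid2nguid helper traced at nvmf/common.sh@759 pipes through tr -d -; judging by the observed pair da9fe27f-4f07-44b0-8701-e8468670c4fd -> DA9FE27F4F0744B08701E8468670C4FD it also upcases, so a behavior-equivalent sketch would be:

  # Sketch inferred from the trace: dash removal is shown explicitly,
  # the upcasing step is inferred from the observed output.
  uuid2nguid() {
      echo "${1//-/}" | tr '[:lower:]' '[:upper:]'
  }
  uuid2nguid da9fe27f-4f07-44b0-8701-e8468670c4fd
  # -> DA9FE27F4F0744B08701E8468670C4FD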
00:16:24.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.191 14:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.191 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.724 14:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:26.724 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:26.724 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:26.724 14:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:26.724 Found net devices under 0000:84:00.0: cvl_0_0 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:26.724 Found net devices under 0000:84:00.1: cvl_0_1 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.724 14:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.724 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:26.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:16:26.724 00:16:26.725 --- 10.0.0.2 ping statistics --- 00:16:26.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.725 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:26.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:16:26.725 00:16:26.725 --- 10.0.0.1 ping statistics --- 00:16:26.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.725 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2502669 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2502669 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2502669 ']' 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.725 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.725 [2024-07-26 14:10:43.407042] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
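For reference while reading the trace above: nvmf_tcp_init splits the two e810 ports across network namespaces, isolating the target port so initiator and target traffic actually cross the wire. A minimal by-hand sketch of the same steps, using the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses from this run:

  # Flush stale addresses, then move the target port into its own namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address both sides; the initiator port stays in the host namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side and verify reachability.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array), which is why nvmf_tgt below is launched through the namespace while the nvme initiator calls run in the host namespace.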
00:16:26.725 [2024-07-26 14:10:43.407169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.725 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.725 [2024-07-26 14:10:43.502088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.983 [2024-07-26 14:10:43.631841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.983 [2024-07-26 14:10:43.631895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.983 [2024-07-26 14:10:43.631911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.983 [2024-07-26 14:10:43.631925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.983 [2024-07-26 14:10:43.631938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.983 [2024-07-26 14:10:43.632017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.983 [2024-07-26 14:10:43.632091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.983 [2024-07-26 14:10:43.632121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.983 [2024-07-26 14:10:43.632140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.983 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:26.983 [2024-07-26 14:10:43.865452] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 Malloc0 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:27.242 14:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 Malloc1 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 [2024-07-26 14:10:43.953527] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.242 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:16:27.500 00:16:27.500 Discovery Log Number of Records 2, Generation counter 2 00:16:27.500 =====Discovery Log Entry 0====== 00:16:27.500 trtype: tcp 00:16:27.500 adrfam: ipv4 00:16:27.500 subtype: current discovery subsystem 00:16:27.500 treq: not required 
00:16:27.500 portid: 0 00:16:27.500 trsvcid: 4420 00:16:27.500 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:27.500 traddr: 10.0.0.2 00:16:27.500 eflags: explicit discovery connections, duplicate discovery information 00:16:27.500 sectype: none 00:16:27.500 =====Discovery Log Entry 1====== 00:16:27.500 trtype: tcp 00:16:27.500 adrfam: ipv4 00:16:27.500 subtype: nvme subsystem 00:16:27.500 treq: not required 00:16:27.500 portid: 0 00:16:27.500 trsvcid: 4420 00:16:27.500 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:27.500 traddr: 10.0.0.2 00:16:27.500 eflags: none 00:16:27.500 sectype: none 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:27.500 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.067 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:28.067 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.067 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.067 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:28.067 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:28.067 14:10:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:29.968 /dev/nvme0n1 ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:29.968 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.226 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:30.226 rmmod nvme_tcp 00:16:30.226 rmmod nvme_fabrics 00:16:30.226 rmmod nvme_keyring 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2502669 ']' 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2502669 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2502669 ']' 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2502669 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.226 14:10:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2502669 00:16:30.226 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.227 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.227 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2502669' 00:16:30.227 killing process with pid 2502669 00:16:30.227 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2502669 00:16:30.227 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2502669 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.485 14:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:33.017 00:16:33.017 real 0m8.868s 00:16:33.017 user 0m15.508s 00:16:33.017 sys 0m2.706s 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:33.017 ************************************ 00:16:33.017 END TEST nvmf_nvme_cli 00:16:33.017 ************************************ 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:33.017 ************************************ 00:16:33.017 START TEST nvmf_vfio_user 00:16:33.017 ************************************ 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:33.017 * Looking for test storage... 
00:16:33.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
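Two details worth pulling out of the common.sh setup above. The host identity used by every nvme call in this job comes from nvme gen-hostnqn, and the nvme_cli test that just finished exercised the standard initiator flow against it. A condensed sketch; the NQNs, address, and serial are the ones visible in this log, while the parameter expansion used to derive the host ID is an assumption about common.sh, not a quote from it:

  HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... in this run
  HOSTID=${HOSTNQN##*:}         # assumed derivation; yields cd6acfbe-4794-e311-a299-001e67a97b02
  # Initiator-side flow from the nvme_cli test above:
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
       -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: Malloc0 and Malloc1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1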
00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:33.017 14:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2503471 00:16:33.017 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2503471' 00:16:33.017 Process pid: 2503471 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2503471 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2503471 ']' 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.018 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:33.018 [2024-07-26 14:10:49.626839] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:16:33.018 [2024-07-26 14:10:49.626937] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.018 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.018 [2024-07-26 14:10:49.698858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.018 [2024-07-26 14:10:49.822212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.018 [2024-07-26 14:10:49.822271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
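A note on core pinning before the startup notices continue: the nvme_cli run above started the target with -m 0xF, while this vfio-user run passes a bracketed core list. Both select cores 0-3, which matches the four reactors reported in each run:

  # Two equivalent ways this job pins nvmf_tgt to cores 0-3:
  nvmf_tgt -i 0 -e 0xFFFF -m 0xF           # hex core mask (nvme_cli run, inside the netns)
  nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'   # explicit core list (this vfio-user run)
  # -e 0xFFFF enables all tracepoint groups; -i 0 is the shm ID that
  # 'spdk_trace -s nvmf -i 0' (suggested in the notices) attaches to.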
00:16:33.018 [2024-07-26 14:10:49.822286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.018 [2024-07-26 14:10:49.822300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.018 [2024-07-26 14:10:49.822312] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.018 [2024-07-26 14:10:49.823453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.018 [2024-07-26 14:10:49.823482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.018 [2024-07-26 14:10:49.823534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.018 [2024-07-26 14:10:49.823537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.276 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.276 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:33.276 14:10:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:34.207 14:10:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:34.465 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:34.465 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:34.465 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:34.465 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:34.465 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:35.030 Malloc1 00:16:35.030 14:10:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:35.594 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:35.852 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:36.109 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:36.109 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:36.109 14:10:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:36.727 Malloc2 00:16:36.727 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
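The target-side setup above (completed for the second device just below) is one fixed RPC sequence per vfio-user device, driven by the seq 1 2 loop visible in the trace. Condensed, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log:

  rpc.py nvmf_create_transport -t VFIOUSER            # once per target
  for i in 1 2; do                                    # NUM_DEVICES=2 in this run
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i    # 64 MB backing bdev, 512 B blocks
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The identify trace that follows attaches to the first of these devices over the vfio-user transport (trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1).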
00:16:37.292 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:37.550 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:37.808 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:37.808 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:37.808 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:37.808 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:37.808 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:37.809 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:37.809 [2024-07-26 14:10:54.613752] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:16:37.809 [2024-07-26 14:10:54.613815] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504028 ] 00:16:37.809 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.809 [2024-07-26 14:10:54.659287] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:37.809 [2024-07-26 14:10:54.661759] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:37.809 [2024-07-26 14:10:54.661792] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbae5669000 00:16:37.809 [2024-07-26 14:10:54.662758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.663758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.664764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.665767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.666771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.667776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.668776] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.669784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:37.809 [2024-07-26 14:10:54.670793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:37.809 [2024-07-26 14:10:54.670815] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbae565e000 00:16:37.809 [2024-07-26 14:10:54.672091] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:37.809 [2024-07-26 14:10:54.691134] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:37.809 [2024-07-26 14:10:54.691176] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:37.809 [2024-07-26 14:10:54.693935] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:37.809 [2024-07-26 14:10:54.694004] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:37.809 [2024-07-26 14:10:54.694126] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:37.809 [2024-07-26 14:10:54.694163] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:37.809 [2024-07-26 14:10:54.694176] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:37.809 [2024-07-26 14:10:54.694926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:37.809 [2024-07-26 14:10:54.694954] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:37.809 [2024-07-26 14:10:54.694970] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:38.069 [2024-07-26 14:10:54.695930] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:38.069 [2024-07-26 14:10:54.695952] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:38.069 [2024-07-26 14:10:54.695968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:38.069 [2024-07-26 14:10:54.696934] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:38.069 [2024-07-26 14:10:54.696956] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:38.069 [2024-07-26 14:10:54.697936] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:38.069 [2024-07-26 14:10:54.697957] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:38.069 [2024-07-26 14:10:54.697967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:38.069 [2024-07-26 14:10:54.697980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:38.069 [2024-07-26 14:10:54.698091] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:38.069 [2024-07-26 14:10:54.698101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:38.069 [2024-07-26 14:10:54.698111] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:38.069 [2024-07-26 14:10:54.702440] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:38.069 [2024-07-26 14:10:54.702968] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:38.069 [2024-07-26 14:10:54.703979] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:38.069 [2024-07-26 14:10:54.704974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:38.069 [2024-07-26 14:10:54.705077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:38.069 [2024-07-26 14:10:54.705992] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:38.069 [2024-07-26 14:10:54.706013] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:38.069 [2024-07-26 14:10:54.706024] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706052] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:38.069 [2024-07-26 14:10:54.706068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706101] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:38.069 [2024-07-26 14:10:54.706112] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.069 [2024-07-26 14:10:54.706120] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.069 [2024-07-26 14:10:54.706149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.069 [2024-07-26 14:10:54.706204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:38.069 [2024-07-26 14:10:54.706225] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:38.069 [2024-07-26 14:10:54.706235] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:38.069 [2024-07-26 14:10:54.706244] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:38.069 [2024-07-26 14:10:54.706252] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:38.069 [2024-07-26 14:10:54.706261] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:38.069 [2024-07-26 14:10:54.706270] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:38.069 [2024-07-26 14:10:54.706279] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706295] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:38.069 [2024-07-26 14:10:54.706337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:38.069 [2024-07-26 14:10:54.706363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.069 [2024-07-26 14:10:54.706378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.069 [2024-07-26 14:10:54.706392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.069 [2024-07-26 14:10:54.706406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.069 [2024-07-26 14:10:54.706416] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706442] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:38.069 [2024-07-26 14:10:54.706475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:38.069 [2024-07-26 14:10:54.706488] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:38.069 
[2024-07-26 14:10:54.706498] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:38.069 [2024-07-26 14:10:54.706563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:38.069 [2024-07-26 14:10:54.706640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706675] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:38.069 [2024-07-26 14:10:54.706684] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:38.069 [2024-07-26 14:10:54.706691] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.069 [2024-07-26 14:10:54.706702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:38.069 [2024-07-26 14:10:54.706720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:38.069 [2024-07-26 14:10:54.706741] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:38.069 [2024-07-26 14:10:54.706766] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:38.069 [2024-07-26 14:10:54.706797] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:38.070 [2024-07-26 14:10:54.706807] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.070 [2024-07-26 14:10:54.706814] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.070 [2024-07-26 14:10:54.706824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.706853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 14:10:54.706880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:16:38.070 [2024-07-26 14:10:54.706897] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.706912] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:38.070 [2024-07-26 14:10:54.706921] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.070 [2024-07-26 14:10:54.706928] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.070 [2024-07-26 14:10:54.706938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.706956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 14:10:54.706973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.706986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.707003] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.707022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.707033] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.707043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.707054] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:38.070 [2024-07-26 14:10:54.707063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:38.070 [2024-07-26 14:10:54.707073] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:38.070 [2024-07-26 14:10:54.707106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.707127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 14:10:54.707149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.707163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 14:10:54.707181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:38.070 [2024-07-26 
14:10:54.707198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 14:10:54.707216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.707228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 14:10:54.707255] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:38.070 [2024-07-26 14:10:54.707267] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:38.070 [2024-07-26 14:10:54.707274] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:38.070 [2024-07-26 14:10:54.707281] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:38.070 [2024-07-26 14:10:54.707287] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:38.070 [2024-07-26 14:10:54.707298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:38.070 [2024-07-26 14:10:54.707311] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:38.070 [2024-07-26 14:10:54.707320] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:38.070 [2024-07-26 14:10:54.707327] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.070 [2024-07-26 14:10:54.707337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.707349] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:38.070 [2024-07-26 14:10:54.707358] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.070 [2024-07-26 14:10:54.707365] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.070 [2024-07-26 14:10:54.707375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.707393] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:38.070 [2024-07-26 14:10:54.707403] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:38.070 [2024-07-26 14:10:54.707410] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.070 [2024-07-26 14:10:54.707420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:38.070 [2024-07-26 14:10:54.707441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 14:10:54.707466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:38.070 [2024-07-26 
14:10:54.707489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:16:38.070 [2024-07-26 14:10:54.707502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:16:38.070 =====================================================
00:16:38.070 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:16:38.070 =====================================================
00:16:38.070 Controller Capabilities/Features
00:16:38.070 ================================
00:16:38.070 Vendor ID: 4e58
00:16:38.070 Subsystem Vendor ID: 4e58
00:16:38.070 Serial Number: SPDK1
00:16:38.070 Model Number: SPDK bdev Controller
00:16:38.070 Firmware Version: 24.09
00:16:38.070 Recommended Arb Burst: 6
00:16:38.070 IEEE OUI Identifier: 8d 6b 50
00:16:38.070 Multi-path I/O
00:16:38.070 May have multiple subsystem ports: Yes
00:16:38.070 May have multiple controllers: Yes
00:16:38.070 Associated with SR-IOV VF: No
00:16:38.070 Max Data Transfer Size: 131072
00:16:38.070 Max Number of Namespaces: 32
00:16:38.070 Max Number of I/O Queues: 127
00:16:38.070 NVMe Specification Version (VS): 1.3
00:16:38.070 NVMe Specification Version (Identify): 1.3
00:16:38.070 Maximum Queue Entries: 256
00:16:38.070 Contiguous Queues Required: Yes
00:16:38.070 Arbitration Mechanisms Supported
00:16:38.070 Weighted Round Robin: Not Supported
00:16:38.070 Vendor Specific: Not Supported
00:16:38.070 Reset Timeout: 15000 ms
00:16:38.070 Doorbell Stride: 4 bytes
00:16:38.070 NVM Subsystem Reset: Not Supported
00:16:38.070 Command Sets Supported
00:16:38.070 NVM Command Set: Supported
00:16:38.070 Boot Partition: Not Supported
00:16:38.070 Memory Page Size Minimum: 4096 bytes
00:16:38.070 Memory Page Size Maximum: 4096 bytes
00:16:38.070 Persistent Memory Region: Not Supported
00:16:38.070 Optional Asynchronous Events Supported
00:16:38.070 Namespace Attribute Notices: Supported
00:16:38.070 Firmware Activation Notices: Not Supported
00:16:38.070 ANA Change Notices: Not Supported
00:16:38.070 PLE Aggregate Log Change Notices: Not Supported
00:16:38.070 LBA Status Info Alert Notices: Not Supported
00:16:38.070 EGE Aggregate Log Change Notices: Not Supported
00:16:38.070 Normal NVM Subsystem Shutdown event: Not Supported
00:16:38.070 Zone Descriptor Change Notices: Not Supported
00:16:38.070 Discovery Log Change Notices: Not Supported
00:16:38.070 Controller Attributes
00:16:38.070 128-bit Host Identifier: Supported
00:16:38.070 Non-Operational Permissive Mode: Not Supported
00:16:38.070 NVM Sets: Not Supported
00:16:38.070 Read Recovery Levels: Not Supported
00:16:38.070 Endurance Groups: Not Supported
00:16:38.070 Predictable Latency Mode: Not Supported
00:16:38.070 Traffic Based Keep ALive: Not Supported
00:16:38.070 Namespace Granularity: Not Supported
00:16:38.070 SQ Associations: Not Supported
00:16:38.070 UUID List: Not Supported
00:16:38.070 Multi-Domain Subsystem: Not Supported
00:16:38.070 Fixed Capacity Management: Not Supported
00:16:38.070 Variable Capacity Management: Not Supported
00:16:38.070 Delete Endurance Group: Not Supported
00:16:38.070 Delete NVM Set: Not Supported
00:16:38.071 Extended LBA Formats Supported: Not Supported
00:16:38.071 Flexible Data Placement Supported: Not Supported
00:16:38.071
00:16:38.071 Controller Memory Buffer Support
00:16:38.071 ================================
00:16:38.071 Supported: No
00:16:38.071
00:16:38.071 Persistent Memory Region Support
00:16:38.071 ================================
00:16:38.071 Supported: No
00:16:38.071
00:16:38.071 Admin Command Set Attributes
00:16:38.071 ============================
00:16:38.071 Security Send/Receive: Not Supported
00:16:38.071 Format NVM: Not Supported
00:16:38.071 Firmware Activate/Download: Not Supported
00:16:38.071 Namespace Management: Not Supported
00:16:38.071 Device Self-Test: Not Supported
00:16:38.071 Directives: Not Supported
00:16:38.071 NVMe-MI: Not Supported
00:16:38.071 Virtualization Management: Not Supported
00:16:38.071 Doorbell Buffer Config: Not Supported
00:16:38.071 Get LBA Status Capability: Not Supported
00:16:38.071 Command & Feature Lockdown Capability: Not Supported
00:16:38.071 Abort Command Limit: 4
00:16:38.071 Async Event Request Limit: 4
00:16:38.071 Number of Firmware Slots: N/A
00:16:38.071 Firmware Slot 1 Read-Only: N/A
00:16:38.071 Firmware Activation Without Reset: N/A
00:16:38.071 Multiple Update Detection Support: N/A
00:16:38.071 Firmware Update Granularity: No Information Provided
00:16:38.071 Per-Namespace SMART Log: No
00:16:38.071 Asymmetric Namespace Access Log Page: Not Supported
00:16:38.071 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:16:38.071 Command Effects Log Page: Supported
00:16:38.071 Get Log Page Extended Data: Supported
00:16:38.071 Telemetry Log Pages: Not Supported
00:16:38.071 Persistent Event Log Pages: Not Supported
00:16:38.071 Supported Log Pages Log Page: May Support
00:16:38.071 Commands Supported & Effects Log Page: Not Supported
00:16:38.071 Feature Identifiers & Effects Log Page:May Support
00:16:38.071 NVMe-MI Commands & Effects Log Page: May Support
00:16:38.071 Data Area 4 for Telemetry Log: Not Supported
00:16:38.071 Error Log Page Entries Supported: 128
00:16:38.071 Keep Alive: Supported
00:16:38.071 Keep Alive Granularity: 10000 ms
00:16:38.071
00:16:38.071 NVM Command Set Attributes
00:16:38.071 ==========================
00:16:38.071 Submission Queue Entry Size
00:16:38.071 Max: 64
00:16:38.071 Min: 64
00:16:38.071 Completion Queue Entry Size
00:16:38.071 Max: 16
00:16:38.071 Min: 16
00:16:38.071 Number of Namespaces: 32
00:16:38.071 Compare Command: Supported
00:16:38.071 Write Uncorrectable Command: Not Supported
00:16:38.071 Dataset Management Command: Supported
00:16:38.071 Write Zeroes Command: Supported
00:16:38.071 Set Features Save Field: Not Supported
00:16:38.071 Reservations: Not Supported
00:16:38.071 Timestamp: Not Supported
00:16:38.071 Copy: Supported
00:16:38.071 Volatile Write Cache: Present
00:16:38.071 Atomic Write Unit (Normal): 1
00:16:38.071 Atomic Write Unit (PFail): 1
00:16:38.071 Atomic Compare & Write Unit: 1
00:16:38.071 Fused Compare & Write: Supported
00:16:38.071 Scatter-Gather List
00:16:38.071 SGL Command Set: Supported (Dword aligned)
00:16:38.071 SGL Keyed: Not Supported
00:16:38.071 SGL Bit Bucket Descriptor: Not Supported
00:16:38.071 SGL Metadata Pointer: Not Supported
00:16:38.071 Oversized SGL: Not Supported
00:16:38.071 SGL Metadata Address: Not Supported
00:16:38.071 SGL Offset: Not Supported
00:16:38.071 Transport SGL Data Block: Not Supported
00:16:38.071 Replay Protected Memory Block: Not Supported
00:16:38.071
00:16:38.071 Firmware Slot Information
00:16:38.071 =========================
00:16:38.071 Active slot: 1
00:16:38.071 Slot 1 Firmware Revision: 24.09
00:16:38.071
00:16:38.071
00:16:38.071 Commands Supported and Effects
00:16:38.071 ==============================
00:16:38.071 Admin Commands
00:16:38.071 --------------
00:16:38.071 Get Log Page (02h): Supported
00:16:38.071 Identify (06h): Supported
00:16:38.071 Abort (08h): Supported
00:16:38.071 Set Features (09h): Supported
00:16:38.071 Get Features (0Ah): Supported
00:16:38.071 Asynchronous Event Request (0Ch): Supported
00:16:38.071 Keep Alive (18h): Supported
00:16:38.071 I/O Commands
00:16:38.071 ------------
00:16:38.071 Flush (00h): Supported LBA-Change
00:16:38.071 Write (01h): Supported LBA-Change
00:16:38.071 Read (02h): Supported
00:16:38.071 Compare (05h): Supported
00:16:38.071 Write Zeroes (08h): Supported LBA-Change
00:16:38.071 Dataset Management (09h): Supported LBA-Change
00:16:38.071 Copy (19h): Supported LBA-Change
00:16:38.071
00:16:38.071 Error Log
00:16:38.071 =========
00:16:38.071
00:16:38.071 Arbitration
00:16:38.071 ===========
00:16:38.071 Arbitration Burst: 1
00:16:38.071
00:16:38.071 Power Management
00:16:38.071 ================
00:16:38.071 Number of Power States: 1
00:16:38.071 Current Power State: Power State #0
00:16:38.071 Power State #0:
00:16:38.071 Max Power: 0.00 W
00:16:38.071 Non-Operational State: Operational
00:16:38.071 Entry Latency: Not Reported
00:16:38.071 Exit Latency: Not Reported
00:16:38.071 Relative Read Throughput: 0
00:16:38.071 Relative Read Latency: 0
00:16:38.071 Relative Write Throughput: 0
00:16:38.071 Relative Write Latency: 0
00:16:38.071 Idle Power: Not Reported
00:16:38.071 Active Power: Not Reported
00:16:38.071 Non-Operational Permissive Mode: Not Supported
00:16:38.071
00:16:38.071 Health Information
00:16:38.071 ==================
00:16:38.071 Critical Warnings:
00:16:38.071 Available Spare Space: OK
00:16:38.071 Temperature: OK
00:16:38.071 Device Reliability: OK
00:16:38.071 Read Only: No
00:16:38.071 Volatile Memory Backup: OK
00:16:38.071 Current Temperature: 0 Kelvin (-273 Celsius)
00:16:38.071 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:38.071 Available Spare: 0%
00:16:38.071 Available Spare Threshold: 0%
00:16:38.071 Life Percentage Used: 0%
00:16:38.071 Data Units Read: 0
00:16:38.071 Data Units Written: 0
00:16:38.071 Host Read Commands: 0
00:16:38.071 Host Write Commands: 0
00:16:38.071 Controller Busy Time: 0 minutes
00:16:38.071 Power Cycles: 0
00:16:38.071 Power On Hours: 0 hours
00:16:38.072 Unsafe Shutdowns: 0
00:16:38.072 Unrecoverable Media Errors: 0
00:16:38.072 Lifetime Error Log Entries: 0
00:16:38.072 Warning Temperature Time: 0 minutes
00:16:38.072 Critical Temperature Time: 0 minutes
00:16:38.072
00:16:38.072 Number of Queues
00:16:38.072 ================
00:16:38.072 Number of I/O Submission Queues: 127
00:16:38.072 Number of I/O Completion Queues: 127
00:16:38.072
00:16:38.072 Active Namespaces
00:16:38.072 =================
00:16:38.072 Namespace ID:1
00:16:38.072 Error Recovery Timeout: Unlimited
00:16:38.072 Command Set Identifier: NVM (00h)
00:16:38.072 Deallocate: Supported
00:16:38.072 Deallocated/Unwritten Error: Not Supported
00:16:38.072 Deallocated Read Value: Unknown
00:16:38.072 Deallocate in Write Zeroes: Not Supported
00:16:38.072 Deallocated Guard Field: 0xFFFF
00:16:38.072 Flush: Supported
00:16:38.072 Reservation: Supported
00:16:38.072 Namespace Sharing Capabilities: Multiple Controllers
00:16:38.072 Size (in LBAs): 131072 (0GiB)
00:16:38.072 Capacity (in LBAs): 131072 (0GiB)
00:16:38.072 Utilization (in LBAs): 131072 (0GiB)
00:16:38.072 NGUID: 01F89151B43346D794FAC7CF8AA31F11
00:16:38.072 UUID: 01f89151-b433-46d7-94fa-c7cf8aa31f11
00:16:38.072 Thin Provisioning: Not Supported
00:16:38.072 Per-NS Atomic Units: Yes
00:16:38.072 Atomic Boundary Size (Normal): 0
00:16:38.072 Atomic Boundary Size (PFail): 0
00:16:38.072 Atomic Boundary Offset: 0
00:16:38.072 Maximum Single Source Range Length: 65535
00:16:38.072 Maximum Copy Length: 65535
00:16:38.072 Maximum Source Range Count: 1
00:16:38.072 NGUID/EUI64 Never Reused: No
00:16:38.072 Namespace Write Protected: No
00:16:38.072 Number of LBA Formats: 1
00:16:38.072 Current LBA Format: LBA Format #00
00:16:38.072 LBA Format #00: Data Size: 512 Metadata Size: 0
00:16:38.072
00:16:38.071 [2024-07-26 14:10:54.707646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:16:38.071 [2024-07-26 14:10:54.707665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:16:38.071 [2024-07-26 14:10:54.707717] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD
00:16:38.071 [2024-07-26 14:10:54.707738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.071 [2024-07-26 14:10:54.707750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.071 [2024-07-26 14:10:54.707761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.071 [2024-07-26 14:10:54.707772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.071 [2024-07-26 14:10:54.708001] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:16:38.071 [2024-07-26 14:10:54.708028] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:16:38.071 [2024-07-26 14:10:54.709001] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:16:38.071 [2024-07-26 14:10:54.709084] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us
00:16:38.071 [2024-07-26 14:10:54.709101] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms
00:16:38.071 [2024-07-26 14:10:54.710011] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:16:38.071 [2024-07-26 14:10:54.710037] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds
00:16:38.071 [2024-07-26 14:10:54.710104] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:16:38.071 [2024-07-26 14:10:54.713441] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:16:38.072 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:16:38.072 EAL: No free 2048 kB hugepages reported on node 1
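The teardown entries just above are the standard NVMe shutdown handshake at register level: the driver reads CC (offset 0x14, value 0x460001), sets CC.SHN to 01b to request a normal shutdown (producing 0x464001), then polls CSTS (offset 0x1c) until SHST reads 10b; the observed value 0x9 is RDY=1 with SHST=10b, i.e. the vfio-user target completed shutdown on the first poll, hence "shutdown complete in 0 milliseconds". A minimal sketch of that sequence, assuming hypothetical mmio_read32()/mmio_write32() accessors for the controller register file (the log's nvme_vfio_ctrlr_get_reg_4/set_reg_4 fill this role over the vfio-user socket; this is not the SPDK code itself):

    #include <stdint.h>
    #include <stdbool.h>

    #define NVME_REG_CC          0x14        /* Controller Configuration */
    #define NVME_REG_CSTS        0x1c        /* Controller Status */
    #define NVME_CC_SHN_NORMAL   (1u << 14)  /* CC.SHN = 01b: normal shutdown */
    #define NVME_CSTS_SHST_MASK  (3u << 2)   /* CSTS.SHST field */
    #define NVME_CSTS_SHST_DONE  (2u << 2)   /* 10b: shutdown processing complete */

    /* Hypothetical register accessors, standing in for the transport layer. */
    extern uint32_t mmio_read32(uint32_t offset);
    extern void mmio_write32(uint32_t offset, uint32_t value);

    static bool nvme_normal_shutdown(void)
    {
        uint32_t cc = mmio_read32(NVME_REG_CC);             /* 0x460001 in the log */
        mmio_write32(NVME_REG_CC, cc | NVME_CC_SHN_NORMAL); /* -> 0x464001 */

        /* Poll CSTS until SHST == 10b; the log reads 0x9 (RDY=1 | SHST=10b)
         * immediately. A real driver bounds this by the 10000 ms timeout. */
        for (int i = 0; i < 10000; i++) {
            uint32_t csts = mmio_read32(NVME_REG_CSTS);
            if ((csts & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_DONE)
                return true;
        }
        return false; /* shutdown timeout exceeded */
    }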
00:16:38.330 [2024-07-26 14:10:54.974403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:16:43.598 Initializing NVMe Controllers
00:16:43.598 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:16:43.598 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:16:43.598 Initialization complete. Launching workers.
00:16:43.598 ========================================================
00:16:43.598 Latency(us)
00:16:43.598 Device Information : IOPS MiB/s Average min max
00:16:43.598 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 26948.54 105.27 4748.92 1350.07 8476.86
00:16:43.598 ========================================================
00:16:43.598 Total : 26948.54 105.27 4748.92 1350.07 8476.86
00:16:43.598
00:16:43.598 [2024-07-26 14:10:59.997282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:16:43.598 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:16:43.598 EAL: No free 2048 kB hugepages reported on node 1
00:16:43.598 [2024-07-26 14:11:00.263612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:16:48.862 Initializing NVMe Controllers
00:16:48.862 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:16:48.862 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:16:48.862 Initialization complete. Launching workers.
00:16:48.862 ========================================================
00:16:48.862 Latency(us)
00:16:48.862 Device Information : IOPS MiB/s Average min max
00:16:48.862 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15987.17 62.45 8011.51 7656.17 15994.26
00:16:48.862 ========================================================
00:16:48.862 Total : 15987.17 62.45 8011.51 7656.17 15994.26
00:16:48.862
00:16:48.862 [2024-07-26 14:11:05.301258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:16:48.862 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:16:48.862 EAL: No free 2048 kB hugepages reported on node 1
00:16:48.862 [2024-07-26 14:11:05.570500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:16:54.129 [2024-07-26 14:11:10.654839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:16:54.129 Initializing NVMe Controllers
00:16:54.129 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:16:54.129 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:16:54.129 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:16:54.129 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:16:54.129 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:16:54.129 Initialization complete. Launching workers.
00:16:54.129 Starting thread on core 2
00:16:54.129 Starting thread on core 3
00:16:54.129 Starting thread on core 1
00:16:54.129 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:16:54.129 EAL: No free 2048 kB hugepages reported on node 1
00:16:54.388 [2024-07-26 14:11:11.021938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:16:57.673 [2024-07-26 14:11:14.389756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:16:57.673 Initializing NVMe Controllers
00:16:57.673 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:16:57.673 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:16:57.673 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:16:57.673 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:16:57.673 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:16:57.673 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:16:57.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:16:57.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:16:57.673 Initialization complete. Launching workers.
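Before the arbitration results, a quick cross-check of the two spdk_nvme_perf Latency(us) tables above: with 4096-byte I/O the MiB/s column is just IOPS * 4096 / 2^20, and the IOPS column itself follows from Little's law, queue depth (-q 128) divided by average latency. A small standalone sketch (plain C; all constants copied from the tables):

    #include <stdio.h>

    int main(void)
    {
        const double io_bytes = 4096.0, qdepth = 128.0;
        const double rd_iops = 26948.54, rd_avg_us = 4748.92;   /* read run  */
        const double wr_iops = 15987.17, wr_avg_us = 8011.51;   /* write run */

        /* Throughput: IOPS * I/O size, reported in MiB/s. */
        printf("read : %.2f MiB/s\n", rd_iops * io_bytes / (1024 * 1024)); /* 105.27 */
        printf("write: %.2f MiB/s\n", wr_iops * io_bytes / (1024 * 1024)); /*  62.45 */

        /* Little's law: in-flight I/Os = rate * latency, so rate = qd / latency. */
        printf("read IOPS  ~ %.0f\n", qdepth / (rd_avg_us * 1e-6));  /* ~26954 */
        printf("write IOPS ~ %.0f\n", qdepth / (wr_avg_us * 1e-6));  /* ~15977 */
        return 0;
    }

Both recomputed IOPS figures land within about 0.1% of the measured columns, which suggests the runs were queue-depth-bound rather than throttled elsewhere.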
00:16:57.673 Starting thread on core 1 with urgent priority queue
00:16:57.673 Starting thread on core 2 with urgent priority queue
00:16:57.673 Starting thread on core 3 with urgent priority queue
00:16:57.673 Starting thread on core 0 with urgent priority queue
00:16:57.673 SPDK bdev Controller (SPDK1 ) core 0: 4205.67 IO/s 23.78 secs/100000 ios
00:16:57.673 SPDK bdev Controller (SPDK1 ) core 1: 4055.00 IO/s 24.66 secs/100000 ios
00:16:57.673 SPDK bdev Controller (SPDK1 ) core 2: 3978.33 IO/s 25.14 secs/100000 ios
00:16:57.673 SPDK bdev Controller (SPDK1 ) core 3: 4426.33 IO/s 22.59 secs/100000 ios
00:16:57.673 ========================================================
00:16:57.673
00:16:57.673 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:16:57.931 EAL: No free 2048 kB hugepages reported on node 1
00:16:57.931 [2024-07-26 14:11:14.719980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:16:57.931 Initializing NVMe Controllers
00:16:57.931 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:16:57.931 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:16:57.931 Namespace ID: 1 size: 0GB
00:16:57.931 Initialization complete.
00:16:57.931 INFO: using host memory buffer for IO
00:16:57.931 Hello world!
00:16:57.931 [2024-07-26 14:11:14.753666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:16:57.931 14:11:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:16:58.189 EAL: No free 2048 kB hugepages reported on node 1
00:16:58.189 [2024-07-26 14:11:15.058714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:16:59.565 Initializing NVMe Controllers
00:16:59.565 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:16:59.565 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:16:59.565 Initialization complete. Launching workers.
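The submit/complete figures that follow are host-side CPU overhead, not device latency: the overhead tool timestamps each read submission call and each completion poll and histograms the deltas. A rough sketch of that measurement pattern (simplified, not the tool's actual source; clock_gettime() stands in for whatever timer the real tool uses):

    #include <stdint.h>
    #include <time.h>
    #include "spdk/nvme.h"

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(int *)arg = 1; /* runs inside process_completions on the same thread */
    }

    /* Submit one 1-LBA read and record how long the submit call and each
     * completion poll take; those deltas feed the two histograms. */
    static void time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                            void *buf, uint64_t lba)
    {
        int done = 0;

        uint64_t t0 = now_ns();
        (void)spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, io_done, &done, 0);
        uint64_t submit_ns = now_ns() - t0;       /* -> "submit (in ns)" */

        while (!done) {
            t0 = now_ns();
            spdk_nvme_qpair_process_completions(qpair, 0);
            uint64_t complete_ns = now_ns() - t0; /* -> "complete (in ns)" */
            (void)complete_ns;                    /* histogrammed by the tool */
        }
        (void)submit_ns;
    }

Note the large max values in the stats below (roughly 4 ms submit, 6 ms complete): single outlier polls dominate the tails of both histograms even though the averages stay in the tens of microseconds.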
00:16:59.565 submit (in ns) avg, min, max = 11420.3, 4200.0, 4022885.9 00:16:59.565 complete (in ns) avg, min, max = 25916.3, 2444.4, 5995859.3 00:16:59.565 00:16:59.565 Submit histogram 00:16:59.565 ================ 00:16:59.565 Range in us Cumulative Count 00:16:59.565 4.196 - 4.219: 0.0084% ( 1) 00:16:59.565 4.219 - 4.243: 0.0671% ( 7) 00:16:59.565 4.243 - 4.267: 0.4280% ( 43) 00:16:59.565 4.267 - 4.290: 1.9637% ( 183) 00:16:59.565 4.290 - 4.314: 6.1262% ( 496) 00:16:59.565 4.314 - 4.338: 13.3518% ( 861) 00:16:59.565 4.338 - 4.361: 21.7607% ( 1002) 00:16:59.565 4.361 - 4.385: 28.6254% ( 818) 00:16:59.565 4.385 - 4.409: 32.4773% ( 459) 00:16:59.565 4.409 - 4.433: 34.2732% ( 214) 00:16:59.565 4.433 - 4.456: 35.2467% ( 116) 00:16:59.565 4.456 - 4.480: 36.0272% ( 93) 00:16:59.565 4.480 - 4.504: 37.6888% ( 198) 00:16:59.565 4.504 - 4.527: 40.9282% ( 386) 00:16:59.565 4.527 - 4.551: 45.3676% ( 529) 00:16:59.565 4.551 - 4.575: 49.5804% ( 502) 00:16:59.565 4.575 - 4.599: 52.3834% ( 334) 00:16:59.565 4.599 - 4.622: 54.0282% ( 196) 00:16:59.565 4.622 - 4.646: 55.0772% ( 125) 00:16:59.565 4.646 - 4.670: 55.6227% ( 65) 00:16:59.565 4.670 - 4.693: 56.0171% ( 47) 00:16:59.565 4.693 - 4.717: 56.5794% ( 67) 00:16:59.565 4.717 - 4.741: 57.5445% ( 115) 00:16:59.565 4.741 - 4.764: 58.2242% ( 81) 00:16:59.565 4.764 - 4.788: 58.8704% ( 77) 00:16:59.565 4.788 - 4.812: 59.0131% ( 17) 00:16:59.565 4.812 - 4.836: 59.1558% ( 17) 00:16:59.565 4.836 - 4.859: 59.4243% ( 32) 00:16:59.565 4.859 - 4.883: 59.9782% ( 66) 00:16:59.565 4.883 - 4.907: 63.0329% ( 364) 00:16:59.565 4.907 - 4.930: 77.2994% ( 1700) 00:16:59.565 4.930 - 4.954: 86.1195% ( 1051) 00:16:59.565 4.954 - 4.978: 94.6123% ( 1012) 00:16:59.565 4.978 - 5.001: 95.9634% ( 161) 00:16:59.565 5.001 - 5.025: 96.3327% ( 44) 00:16:59.565 5.025 - 5.049: 96.5844% ( 30) 00:16:59.565 5.049 - 5.073: 96.7691% ( 22) 00:16:59.565 5.073 - 5.096: 96.8614% ( 11) 00:16:59.565 5.096 - 5.120: 96.9872% ( 15) 00:16:59.565 5.120 - 5.144: 97.1131% ( 15) 00:16:59.565 5.144 - 5.167: 97.2390% ( 15) 00:16:59.565 5.167 - 5.191: 97.4824% ( 29) 00:16:59.565 5.191 - 5.215: 97.7593% ( 33) 00:16:59.565 5.215 - 5.239: 97.8768% ( 14) 00:16:59.565 5.239 - 5.262: 97.9523% ( 9) 00:16:59.565 5.262 - 5.286: 98.0363% ( 10) 00:16:59.565 5.286 - 5.310: 98.0866% ( 6) 00:16:59.565 5.310 - 5.333: 98.1286% ( 5) 00:16:59.565 5.333 - 5.357: 98.1957% ( 8) 00:16:59.565 5.357 - 5.381: 98.2461% ( 6) 00:16:59.565 5.381 - 5.404: 98.3048% ( 7) 00:16:59.565 5.404 - 5.428: 98.3468% ( 5) 00:16:59.565 5.428 - 5.452: 98.3887% ( 5) 00:16:59.565 5.452 - 5.476: 98.4475% ( 7) 00:16:59.565 5.476 - 5.499: 98.4726% ( 3) 00:16:59.565 5.499 - 5.523: 98.5230% ( 6) 00:16:59.565 5.547 - 5.570: 98.5482% ( 3) 00:16:59.565 5.570 - 5.594: 98.5733% ( 3) 00:16:59.565 5.594 - 5.618: 98.6153% ( 5) 00:16:59.565 5.618 - 5.641: 98.6573% ( 5) 00:16:59.565 5.641 - 5.665: 98.6657% ( 1) 00:16:59.565 5.665 - 5.689: 98.6824% ( 2) 00:16:59.565 5.689 - 5.713: 98.7076% ( 3) 00:16:59.565 5.736 - 5.760: 98.7160% ( 1) 00:16:59.565 5.760 - 5.784: 98.7412% ( 3) 00:16:59.565 5.784 - 5.807: 98.7580% ( 2) 00:16:59.565 5.807 - 5.831: 98.7915% ( 4) 00:16:59.565 5.831 - 5.855: 98.7999% ( 1) 00:16:59.565 5.855 - 5.879: 98.8251% ( 3) 00:16:59.565 5.879 - 5.902: 98.8503% ( 3) 00:16:59.565 5.973 - 5.997: 98.8587% ( 1) 00:16:59.565 5.997 - 6.021: 98.8671% ( 1) 00:16:59.565 6.305 - 6.353: 98.8755% ( 1) 00:16:59.565 6.447 - 6.495: 98.8839% ( 1) 00:16:59.565 6.637 - 6.684: 98.8922% ( 1) 00:16:59.565 6.874 - 6.921: 98.9006% ( 1) 00:16:59.565 6.969 - 7.016: 98.9090% ( 
1) 00:16:59.565 7.016 - 7.064: 98.9174% ( 1) 00:16:59.565 7.253 - 7.301: 98.9258% ( 1) 00:16:59.565 7.396 - 7.443: 98.9342% ( 1) 00:16:59.565 7.633 - 7.680: 98.9510% ( 2) 00:16:59.565 7.727 - 7.775: 98.9594% ( 1) 00:16:59.565 7.775 - 7.822: 98.9762% ( 2) 00:16:59.565 7.870 - 7.917: 98.9846% ( 1) 00:16:59.565 8.012 - 8.059: 99.0013% ( 2) 00:16:59.565 8.107 - 8.154: 99.0097% ( 1) 00:16:59.565 8.201 - 8.249: 99.0181% ( 1) 00:16:59.565 8.391 - 8.439: 99.0265% ( 1) 00:16:59.565 8.439 - 8.486: 99.0433% ( 2) 00:16:59.565 8.486 - 8.533: 99.0601% ( 2) 00:16:59.565 8.533 - 8.581: 99.0769% ( 2) 00:16:59.565 8.581 - 8.628: 99.0937% ( 2) 00:16:59.565 8.628 - 8.676: 99.1020% ( 1) 00:16:59.565 8.723 - 8.770: 99.1524% ( 6) 00:16:59.565 8.770 - 8.818: 99.1608% ( 1) 00:16:59.565 8.865 - 8.913: 99.1776% ( 2) 00:16:59.565 8.913 - 8.960: 99.1860% ( 1) 00:16:59.565 9.055 - 9.102: 99.1944% ( 1) 00:16:59.565 9.102 - 9.150: 99.2028% ( 1) 00:16:59.565 9.150 - 9.197: 99.2195% ( 2) 00:16:59.565 9.197 - 9.244: 99.2279% ( 1) 00:16:59.565 9.244 - 9.292: 99.2363% ( 1) 00:16:59.565 9.292 - 9.339: 99.2447% ( 1) 00:16:59.565 9.481 - 9.529: 99.2615% ( 2) 00:16:59.565 9.529 - 9.576: 99.2699% ( 1) 00:16:59.565 9.624 - 9.671: 99.2951% ( 3) 00:16:59.565 9.719 - 9.766: 99.3118% ( 2) 00:16:59.565 9.766 - 9.813: 99.3202% ( 1) 00:16:59.565 9.813 - 9.861: 99.3370% ( 2) 00:16:59.565 9.861 - 9.908: 99.3454% ( 1) 00:16:59.565 9.956 - 10.003: 99.3538% ( 1) 00:16:59.565 10.240 - 10.287: 99.3622% ( 1) 00:16:59.565 10.287 - 10.335: 99.3790% ( 2) 00:16:59.565 10.572 - 10.619: 99.3958% ( 2) 00:16:59.565 10.619 - 10.667: 99.4042% ( 1) 00:16:59.565 10.667 - 10.714: 99.4126% ( 1) 00:16:59.565 10.714 - 10.761: 99.4293% ( 2) 00:16:59.565 10.761 - 10.809: 99.4461% ( 2) 00:16:59.565 11.046 - 11.093: 99.4545% ( 1) 00:16:59.565 11.283 - 11.330: 99.4629% ( 1) 00:16:59.565 11.425 - 11.473: 99.4713% ( 1) 00:16:59.565 11.473 - 11.520: 99.4881% ( 2) 00:16:59.565 11.567 - 11.615: 99.4965% ( 1) 00:16:59.565 11.615 - 11.662: 99.5049% ( 1) 00:16:59.565 11.710 - 11.757: 99.5300% ( 3) 00:16:59.565 11.757 - 11.804: 99.5384% ( 1) 00:16:59.565 11.994 - 12.041: 99.5468% ( 1) 00:16:59.565 12.231 - 12.326: 99.5552% ( 1) 00:16:59.565 12.610 - 12.705: 99.5636% ( 1) 00:16:59.565 12.705 - 12.800: 99.5720% ( 1) 00:16:59.565 12.800 - 12.895: 99.5804% ( 1) 00:16:59.565 12.895 - 12.990: 99.5888% ( 1) 00:16:59.565 12.990 - 13.084: 99.5972% ( 1) 00:16:59.565 13.179 - 13.274: 99.6056% ( 1) 00:16:59.565 13.274 - 13.369: 99.6140% ( 1) 00:16:59.565 13.369 - 13.464: 99.6307% ( 2) 00:16:59.565 13.464 - 13.559: 99.6559% ( 3) 00:16:59.565 13.559 - 13.653: 99.6727% ( 2) 00:16:59.565 13.748 - 13.843: 99.6811% ( 1) 00:16:59.565 13.843 - 13.938: 99.6895% ( 1) 00:16:59.565 13.938 - 14.033: 99.6979% ( 1) 00:16:59.565 14.127 - 14.222: 99.7147% ( 2) 00:16:59.565 14.317 - 14.412: 99.7315% ( 2) 00:16:59.565 14.507 - 14.601: 99.7398% ( 1) 00:16:59.565 14.696 - 14.791: 99.7482% ( 1) 00:16:59.566 14.886 - 14.981: 99.7650% ( 2) 00:16:59.566 14.981 - 15.076: 99.7818% ( 2) 00:16:59.566 15.076 - 15.170: 99.7902% ( 1) 00:16:59.566 15.265 - 15.360: 99.7986% ( 1) 00:16:59.566 15.360 - 15.455: 99.8070% ( 1) 00:16:59.566 15.644 - 15.739: 99.8154% ( 1) 00:16:59.566 15.834 - 15.929: 99.8238% ( 1) 00:16:59.566 23.609 - 23.704: 99.8322% ( 1) 00:16:59.566 3980.705 - 4004.978: 99.9916% ( 19) 00:16:59.566 4004.978 - 4029.250: 100.0000% ( 1) 00:16:59.566 00:16:59.566 Complete histogram 00:16:59.566 ================== 00:16:59.566 Range in us Cumulative Count 00:16:59.566 2.441 - 2.453: 0.5791% ( 69) 00:16:59.566 
2.453 - 2.465: 25.7301% ( 2997) 00:16:59.566 2.465 - 2.477: 63.8553% ( 4543) 00:16:59.566 2.477 - 2.489: 69.3353% ( 653) 00:16:59.566 2.489 - 2.501: 76.6700% ( 874) 00:16:59.566 2.501 - 2.513: 89.6442% ( 1546) 00:16:59.566 2.513 - 2.524: 93.9745% ( 516) 00:16:59.566 2.524 - 2.536: 96.2236% ( 268) 00:16:59.566 2.536 - 2.548: 97.6586% ( 171) 00:16:59.566 2.548 - 2.560: 98.2209% ( 67) 00:16:59.566 [2024-07-26 14:11:16.084315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:59.566 2.560 - 2.572: 98.4978% ( 33) 00:16:59.566 2.572 - 2.584: 98.5817% ( 10) 00:16:59.566 2.584 - 2.596: 98.6573% ( 9) 00:16:59.566 2.596 - 2.607: 98.6824% ( 3) 00:16:59.566 2.607 - 2.619: 98.6992% ( 2) 00:16:59.566 2.619 - 2.631: 98.7160% ( 2) 00:16:59.566 2.631 - 2.643: 98.7244% ( 1) 00:16:59.566 2.643 - 2.655: 98.7580% ( 4) 00:16:59.566 2.655 - 2.667: 98.7831% ( 3) 00:16:59.566 2.667 - 2.679: 98.8083% ( 3) 00:16:59.566 2.679 - 2.690: 98.8419% ( 4) 00:16:59.566 2.690 - 2.702: 98.8587% ( 2) 00:16:59.566 2.702 - 2.714: 98.8755% ( 2) 00:16:59.566 2.726 - 2.738: 98.8922% ( 2) 00:16:59.566 2.750 - 2.761: 98.9006% ( 1) 00:16:59.566 2.773 - 2.785: 98.9090% ( 1) 00:16:59.566 2.785 - 2.797: 98.9174% ( 1) 00:16:59.566 3.176 - 3.200: 98.9258% ( 1) 00:16:59.566 3.319 - 3.342: 98.9342% ( 1) 00:16:59.566 3.342 - 3.366: 98.9510% ( 2) 00:16:59.566 3.366 - 3.390: 98.9678% ( 2) 00:16:59.566 3.390 - 3.413: 99.0013% ( 4) 00:16:59.566 3.413 - 3.437: 99.0517% ( 6) 00:16:59.566 3.437 - 3.461: 99.0685% ( 2) 00:16:59.566 3.461 - 3.484: 99.0937% ( 3) 00:16:59.566 3.484 - 3.508: 99.1188% ( 3) 00:16:59.566 3.508 - 3.532: 99.1356% ( 2) 00:16:59.566 3.532 - 3.556: 99.1524% ( 2) 00:16:59.566 3.556 - 3.579: 99.1692% ( 2) 00:16:59.566 3.579 - 3.603: 99.1776% ( 1) 00:16:59.566 3.603 - 3.627: 99.1860% ( 1) 00:16:59.566 3.650 - 3.674: 99.1944% ( 1) 00:16:59.566 3.674 - 3.698: 99.2028% ( 1) 00:16:59.566 3.698 - 3.721: 99.2111% ( 1) 00:16:59.566 5.997 - 6.021: 99.2195% ( 1) 00:16:59.566 6.684 - 6.732: 99.2279% ( 1) 00:16:59.566 6.732 - 6.779: 99.2363% ( 1) 00:16:59.566 6.874 - 6.921: 99.2531% ( 2) 00:16:59.566 6.921 - 6.969: 99.2615% ( 1) 00:16:59.566 7.111 - 7.159: 99.2699% ( 1) 00:16:59.566 7.396 - 7.443: 99.2783% ( 1) 00:16:59.566 7.585 - 7.633: 99.2867% ( 1) 00:16:59.566 7.633 - 7.680: 99.3118% ( 3) 00:16:59.566 7.775 - 7.822: 99.3202% ( 1) 00:16:59.566 8.012 - 8.059: 99.3286% ( 1) 00:16:59.566 8.201 - 8.249: 99.3370% ( 1) 00:16:59.566 8.344 - 8.391: 99.3454% ( 1) 00:16:59.566 8.391 - 8.439: 99.3538% ( 1) 00:16:59.566 8.486 - 8.533: 99.3706% ( 2) 00:16:59.566 8.581 - 8.628: 99.3790% ( 1) 00:16:59.566 9.197 - 9.244: 99.3874% ( 1) 00:16:59.566 9.339 - 9.387: 99.3958% ( 1) 00:16:59.566 9.387 - 9.434: 99.4126% ( 2) 00:16:59.566 1207.561 - 1213.630: 99.4209% ( 1) 00:16:59.566 3009.801 - 3021.938: 99.4293% ( 1) 00:16:59.566 3980.705 - 4004.978: 99.9832% ( 66) 00:16:59.566 4975.881 - 5000.154: 99.9916% ( 1) 00:16:59.566 5995.330 - 6019.603: 100.0000% ( 1) 00:16:59.566 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:59.566 [ 00:16:59.566 { 00:16:59.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:59.566 "subtype": "Discovery", 00:16:59.566 "listen_addresses": [], 00:16:59.566 "allow_any_host": true, 00:16:59.566 "hosts": [] 00:16:59.566 }, 00:16:59.566 { 00:16:59.566 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:59.566 "subtype": "NVMe", 00:16:59.566 "listen_addresses": [ 00:16:59.566 { 00:16:59.566 "trtype": "VFIOUSER", 00:16:59.566 "adrfam": "IPv4", 00:16:59.566 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:59.566 "trsvcid": "0" 00:16:59.566 } 00:16:59.566 ], 00:16:59.566 "allow_any_host": true, 00:16:59.566 "hosts": [], 00:16:59.566 "serial_number": "SPDK1", 00:16:59.566 "model_number": "SPDK bdev Controller", 00:16:59.566 "max_namespaces": 32, 00:16:59.566 "min_cntlid": 1, 00:16:59.566 "max_cntlid": 65519, 00:16:59.566 "namespaces": [ 00:16:59.566 { 00:16:59.566 "nsid": 1, 00:16:59.566 "bdev_name": "Malloc1", 00:16:59.566 "name": "Malloc1", 00:16:59.566 "nguid": "01F89151B43346D794FAC7CF8AA31F11", 00:16:59.566 "uuid": "01f89151-b433-46d7-94fa-c7cf8aa31f11" 00:16:59.566 } 00:16:59.566 ] 00:16:59.566 }, 00:16:59.566 { 00:16:59.566 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:59.566 "subtype": "NVMe", 00:16:59.566 "listen_addresses": [ 00:16:59.566 { 00:16:59.566 "trtype": "VFIOUSER", 00:16:59.566 "adrfam": "IPv4", 00:16:59.566 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:59.566 "trsvcid": "0" 00:16:59.566 } 00:16:59.566 ], 00:16:59.566 "allow_any_host": true, 00:16:59.566 "hosts": [], 00:16:59.566 "serial_number": "SPDK2", 00:16:59.566 "model_number": "SPDK bdev Controller", 00:16:59.566 "max_namespaces": 32, 00:16:59.566 "min_cntlid": 1, 00:16:59.566 "max_cntlid": 65519, 00:16:59.566 "namespaces": [ 00:16:59.566 { 00:16:59.566 "nsid": 1, 00:16:59.566 "bdev_name": "Malloc2", 00:16:59.566 "name": "Malloc2", 00:16:59.566 "nguid": "7D5F264FC0024BE9A69DB075BEC88A6D", 00:16:59.566 "uuid": "7d5f264f-c002-4be9-a69d-b075bec88a6d" 00:16:59.566 } 00:16:59.566 ] 00:16:59.566 } 00:16:59.566 ] 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2506545 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:59.566 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:59.825 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.825 [2024-07-26 14:11:16.593799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:00.113 Malloc3 00:17:00.113 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:00.375 [2024-07-26 14:11:17.226625] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:00.375 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:00.633 Asynchronous Event Request test 00:17:00.633 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:00.633 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:00.633 Registering asynchronous event callbacks... 00:17:00.633 Starting namespace attribute notice tests for all controllers... 00:17:00.633 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:00.633 aer_cb - Changed Namespace 00:17:00.633 Cleaning up... 00:17:00.891 [ 00:17:00.891 { 00:17:00.891 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:00.891 "subtype": "Discovery", 00:17:00.891 "listen_addresses": [], 00:17:00.891 "allow_any_host": true, 00:17:00.891 "hosts": [] 00:17:00.891 }, 00:17:00.891 { 00:17:00.891 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:00.891 "subtype": "NVMe", 00:17:00.891 "listen_addresses": [ 00:17:00.891 { 00:17:00.891 "trtype": "VFIOUSER", 00:17:00.891 "adrfam": "IPv4", 00:17:00.891 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:00.891 "trsvcid": "0" 00:17:00.891 } 00:17:00.891 ], 00:17:00.891 "allow_any_host": true, 00:17:00.891 "hosts": [], 00:17:00.891 "serial_number": "SPDK1", 00:17:00.891 "model_number": "SPDK bdev Controller", 00:17:00.891 "max_namespaces": 32, 00:17:00.891 "min_cntlid": 1, 00:17:00.891 "max_cntlid": 65519, 00:17:00.891 "namespaces": [ 00:17:00.891 { 00:17:00.891 "nsid": 1, 00:17:00.891 "bdev_name": "Malloc1", 00:17:00.891 "name": "Malloc1", 00:17:00.891 "nguid": "01F89151B43346D794FAC7CF8AA31F11", 00:17:00.891 "uuid": "01f89151-b433-46d7-94fa-c7cf8aa31f11" 00:17:00.891 }, 00:17:00.891 { 00:17:00.891 "nsid": 2, 00:17:00.891 "bdev_name": "Malloc3", 00:17:00.891 "name": "Malloc3", 00:17:00.891 "nguid": "3BA16FD414D24246AD32350EDF0AEAB4", 00:17:00.891 "uuid": "3ba16fd4-14d2-4246-ad32-350edf0aeab4" 00:17:00.891 } 00:17:00.891 ] 00:17:00.891 }, 00:17:00.891 { 00:17:00.891 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:00.891 "subtype": "NVMe", 00:17:00.891 "listen_addresses": [ 00:17:00.891 { 00:17:00.891 "trtype": "VFIOUSER", 00:17:00.891 "adrfam": "IPv4", 00:17:00.891 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:00.891 "trsvcid": "0" 00:17:00.891 } 00:17:00.891 ], 00:17:00.891 "allow_any_host": true, 00:17:00.891 "hosts": [], 00:17:00.891 
"serial_number": "SPDK2", 00:17:00.891 "model_number": "SPDK bdev Controller", 00:17:00.891 "max_namespaces": 32, 00:17:00.891 "min_cntlid": 1, 00:17:00.891 "max_cntlid": 65519, 00:17:00.891 "namespaces": [ 00:17:00.891 { 00:17:00.891 "nsid": 1, 00:17:00.891 "bdev_name": "Malloc2", 00:17:00.891 "name": "Malloc2", 00:17:00.891 "nguid": "7D5F264FC0024BE9A69DB075BEC88A6D", 00:17:00.891 "uuid": "7d5f264f-c002-4be9-a69d-b075bec88a6d" 00:17:00.891 } 00:17:00.891 ] 00:17:00.891 } 00:17:00.891 ] 00:17:00.891 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2506545 00:17:00.891 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:00.891 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:00.892 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:00.892 14:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:00.892 [2024-07-26 14:11:17.773501] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:17:00.892 [2024-07-26 14:11:17.773594] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506799 ] 00:17:01.151 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.151 [2024-07-26 14:11:17.824951] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:01.151 [2024-07-26 14:11:17.833726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:01.151 [2024-07-26 14:11:17.833760] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe2ca2ba000 00:17:01.151 [2024-07-26 14:11:17.834727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.835733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.836735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.837747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.838751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.839762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.840776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.841789] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:01.151 [2024-07-26 14:11:17.842798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:01.151 [2024-07-26 14:11:17.842822] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe2ca2af000 00:17:01.151 [2024-07-26 14:11:17.844094] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:01.151 [2024-07-26 14:11:17.864107] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:01.151 [2024-07-26 14:11:17.864147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:01.151 [2024-07-26 14:11:17.866248] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:01.151 [2024-07-26 14:11:17.866313] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:01.151 [2024-07-26 14:11:17.866420] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:01.151 [2024-07-26 14:11:17.866460] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:01.151 [2024-07-26 14:11:17.866474] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:01.151 [2024-07-26 14:11:17.867256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:01.151 [2024-07-26 14:11:17.867286] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:01.151 [2024-07-26 14:11:17.867302] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:01.151 [2024-07-26 14:11:17.868260] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:01.151 [2024-07-26 14:11:17.868284] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:01.152 [2024-07-26 14:11:17.868299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:01.152 [2024-07-26 14:11:17.869263] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:01.152 [2024-07-26 14:11:17.869287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:01.152 [2024-07-26 14:11:17.870266] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:01.152 [2024-07-26 14:11:17.870289] 
nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:01.152 [2024-07-26 14:11:17.870305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:01.152 [2024-07-26 14:11:17.870319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:01.152 [2024-07-26 14:11:17.870435] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:01.152 [2024-07-26 14:11:17.870446] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:01.152 [2024-07-26 14:11:17.870456] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:01.152 [2024-07-26 14:11:17.871276] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:01.152 [2024-07-26 14:11:17.872288] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:01.152 [2024-07-26 14:11:17.873295] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:01.152 [2024-07-26 14:11:17.874287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:01.152 [2024-07-26 14:11:17.874362] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:01.152 [2024-07-26 14:11:17.875310] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:01.152 [2024-07-26 14:11:17.875333] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:01.152 [2024-07-26 14:11:17.875344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.875371] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:01.152 [2024-07-26 14:11:17.875386] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.875415] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:01.152 [2024-07-26 14:11:17.875426] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:01.152 [2024-07-26 14:11:17.875446] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:01.152 [2024-07-26 14:11:17.875469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:01.152 [2024-07-26 14:11:17.883450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:01.152 [2024-07-26 14:11:17.883479] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:01.152 [2024-07-26 14:11:17.883490] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:01.152 [2024-07-26 14:11:17.883498] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:01.152 [2024-07-26 14:11:17.883507] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:01.152 [2024-07-26 14:11:17.883517] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:01.152 [2024-07-26 14:11:17.883531] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:01.152 [2024-07-26 14:11:17.883541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.883555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.883577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:01.152 [2024-07-26 14:11:17.891444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:01.152 [2024-07-26 14:11:17.891486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.152 [2024-07-26 14:11:17.891503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.152 [2024-07-26 14:11:17.891517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.152 [2024-07-26 14:11:17.891530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.152 [2024-07-26 14:11:17.891540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.891557] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.891574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:01.152 [2024-07-26 14:11:17.899442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:01.152 [2024-07-26 14:11:17.899463] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:01.152 [2024-07-26 14:11:17.899480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.899499] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.899512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.899527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:01.152 [2024-07-26 14:11:17.907440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:01.152 [2024-07-26 14:11:17.907525] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.907546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.907561] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:01.152 [2024-07-26 14:11:17.907571] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:01.152 [2024-07-26 14:11:17.907578] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:01.152 [2024-07-26 14:11:17.907589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:01.152 [2024-07-26 14:11:17.915439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:01.152 [2024-07-26 14:11:17.915467] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:01.152 [2024-07-26 14:11:17.915491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.915510] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.915525] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:01.152 [2024-07-26 14:11:17.915535] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:01.152 [2024-07-26 14:11:17.915542] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:01.152 [2024-07-26 14:11:17.915554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:01.152 [2024-07-26 14:11:17.923443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:01.152 [2024-07-26 14:11:17.923477] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.923496] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:01.152 [2024-07-26 14:11:17.923511] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:01.152 [2024-07-26 14:11:17.923521] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:01.152 [2024-07-26 14:11:17.923528] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:01.153 [2024-07-26 14:11:17.923539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.931446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.931470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:01.153 [2024-07-26 14:11:17.931485] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:01.153 [2024-07-26 14:11:17.931504] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:01.153 [2024-07-26 14:11:17.931520] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:01.153 [2024-07-26 14:11:17.931531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:01.153 [2024-07-26 14:11:17.931541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:01.153 [2024-07-26 14:11:17.931551] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:01.153 [2024-07-26 14:11:17.931559] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:01.153 [2024-07-26 14:11:17.931569] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:01.153 [2024-07-26 14:11:17.931599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.939444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.939474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.947444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.947472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.955440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.955469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.963443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.963479] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:01.153 [2024-07-26 14:11:17.963492] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:01.153 [2024-07-26 14:11:17.963500] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:01.153 [2024-07-26 14:11:17.963506] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:01.153 [2024-07-26 14:11:17.963513] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:01.153 [2024-07-26 14:11:17.963524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:01.153 [2024-07-26 14:11:17.963538] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:01.153 [2024-07-26 14:11:17.963547] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:01.153 [2024-07-26 14:11:17.963554] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:01.153 [2024-07-26 14:11:17.963564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.963577] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:01.153 [2024-07-26 14:11:17.963586] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:01.153 [2024-07-26 14:11:17.963593] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:01.153 [2024-07-26 14:11:17.963602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.963617] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:01.153 [2024-07-26 14:11:17.963625] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:01.153 [2024-07-26 14:11:17.963632] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:01.153 [2024-07-26 14:11:17.963642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:01.153 [2024-07-26 14:11:17.971440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.971471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.971492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:01.153 [2024-07-26 14:11:17.971509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:01.153 ===================================================== 00:17:01.153 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:01.153 ===================================================== 00:17:01.153 Controller Capabilities/Features 00:17:01.153 ================================ 00:17:01.153 Vendor ID: 4e58 00:17:01.153 Subsystem Vendor ID: 4e58 00:17:01.153 Serial Number: SPDK2 00:17:01.153 Model Number: SPDK bdev Controller 00:17:01.153 Firmware Version: 24.09 00:17:01.153 Recommended Arb Burst: 6 00:17:01.153 IEEE OUI Identifier: 8d 6b 50 00:17:01.153 Multi-path I/O 00:17:01.153 May have multiple subsystem ports: Yes 00:17:01.153 May have multiple controllers: Yes 00:17:01.153 Associated with SR-IOV VF: No 00:17:01.153 Max Data Transfer Size: 131072 00:17:01.153 Max Number of Namespaces: 32 00:17:01.153 Max Number of I/O Queues: 127 00:17:01.153 NVMe Specification Version (VS): 1.3 00:17:01.153 NVMe Specification Version (Identify): 1.3 00:17:01.153 Maximum Queue Entries: 256 00:17:01.153 Contiguous Queues Required: Yes 00:17:01.153 Arbitration Mechanisms Supported 00:17:01.153 Weighted Round Robin: Not Supported 00:17:01.153 Vendor Specific: Not Supported 00:17:01.153 Reset Timeout: 15000 ms 00:17:01.153 Doorbell Stride: 4 bytes 00:17:01.153 NVM Subsystem Reset: Not Supported 00:17:01.153 Command Sets Supported 00:17:01.153 NVM Command Set: Supported 00:17:01.153 Boot Partition: Not Supported 00:17:01.153 Memory Page Size Minimum: 4096 bytes 00:17:01.153 Memory Page Size Maximum: 4096 bytes 00:17:01.153 Persistent Memory Region: Not Supported 00:17:01.153 Optional Asynchronous Events Supported 00:17:01.153 Namespace Attribute Notices: Supported 00:17:01.153 Firmware Activation Notices: Not Supported 00:17:01.153 ANA Change Notices: Not Supported 00:17:01.153 PLE Aggregate Log Change Notices: Not Supported 00:17:01.153 LBA Status Info Alert Notices: Not Supported 00:17:01.153 EGE Aggregate Log Change Notices: Not Supported 00:17:01.153 Normal NVM Subsystem Shutdown event: Not Supported 00:17:01.153 Zone Descriptor Change Notices: Not Supported 00:17:01.153 Discovery Log Change Notices: Not Supported 00:17:01.153 Controller Attributes 00:17:01.153 128-bit Host Identifier: Supported 00:17:01.153 Non-Operational Permissive Mode: Not Supported 00:17:01.153 NVM Sets: Not Supported 00:17:01.153 Read Recovery Levels: Not Supported 00:17:01.153 Endurance Groups: Not Supported 00:17:01.153 Predictable Latency Mode: Not Supported 00:17:01.153 Traffic Based Keep ALive: Not Supported 00:17:01.153 Namespace Granularity: Not Supported 00:17:01.153 SQ Associations: Not Supported 00:17:01.153 UUID List: Not Supported 00:17:01.153 Multi-Domain Subsystem: Not Supported 00:17:01.153 Fixed Capacity Management: Not Supported 00:17:01.153 Variable Capacity Management: Not Supported 00:17:01.153 Delete Endurance Group: Not Supported 00:17:01.153 Delete NVM Set: Not Supported 00:17:01.153 Extended LBA Formats Supported: Not Supported 00:17:01.153 Flexible Data Placement Supported: Not Supported 00:17:01.153 00:17:01.153 Controller Memory Buffer Support 00:17:01.153 ================================ 00:17:01.153 Supported: No 00:17:01.153 00:17:01.153 Persistent Memory Region Support 00:17:01.153 
================================ 00:17:01.153 Supported: No 00:17:01.153 00:17:01.153 Admin Command Set Attributes 00:17:01.153 ============================ 00:17:01.153 Security Send/Receive: Not Supported 00:17:01.153 Format NVM: Not Supported 00:17:01.153 Firmware Activate/Download: Not Supported 00:17:01.153 Namespace Management: Not Supported 00:17:01.154 Device Self-Test: Not Supported 00:17:01.154 Directives: Not Supported 00:17:01.154 NVMe-MI: Not Supported 00:17:01.154 Virtualization Management: Not Supported 00:17:01.154 Doorbell Buffer Config: Not Supported 00:17:01.154 Get LBA Status Capability: Not Supported 00:17:01.154 Command & Feature Lockdown Capability: Not Supported 00:17:01.154 Abort Command Limit: 4 00:17:01.154 Async Event Request Limit: 4 00:17:01.154 Number of Firmware Slots: N/A 00:17:01.154 Firmware Slot 1 Read-Only: N/A 00:17:01.154 Firmware Activation Without Reset: N/A 00:17:01.154 Multiple Update Detection Support: N/A 00:17:01.154 Firmware Update Granularity: No Information Provided 00:17:01.154 Per-Namespace SMART Log: No 00:17:01.154 Asymmetric Namespace Access Log Page: Not Supported 00:17:01.154 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:01.154 Command Effects Log Page: Supported 00:17:01.154 Get Log Page Extended Data: Supported 00:17:01.154 Telemetry Log Pages: Not Supported 00:17:01.154 Persistent Event Log Pages: Not Supported 00:17:01.154 Supported Log Pages Log Page: May Support 00:17:01.154 Commands Supported & Effects Log Page: Not Supported 00:17:01.154 Feature Identifiers & Effects Log Page:May Support 00:17:01.154 NVMe-MI Commands & Effects Log Page: May Support 00:17:01.154 Data Area 4 for Telemetry Log: Not Supported 00:17:01.154 Error Log Page Entries Supported: 128 00:17:01.154 Keep Alive: Supported 00:17:01.154 Keep Alive Granularity: 10000 ms 00:17:01.154 00:17:01.154 NVM Command Set Attributes 00:17:01.154 ========================== 00:17:01.154 Submission Queue Entry Size 00:17:01.154 Max: 64 00:17:01.154 Min: 64 00:17:01.154 Completion Queue Entry Size 00:17:01.154 Max: 16 00:17:01.154 Min: 16 00:17:01.154 Number of Namespaces: 32 00:17:01.154 Compare Command: Supported 00:17:01.154 Write Uncorrectable Command: Not Supported 00:17:01.154 Dataset Management Command: Supported 00:17:01.154 Write Zeroes Command: Supported 00:17:01.154 Set Features Save Field: Not Supported 00:17:01.154 Reservations: Not Supported 00:17:01.154 Timestamp: Not Supported 00:17:01.154 Copy: Supported 00:17:01.154 Volatile Write Cache: Present 00:17:01.154 Atomic Write Unit (Normal): 1 00:17:01.154 Atomic Write Unit (PFail): 1 00:17:01.154 Atomic Compare & Write Unit: 1 00:17:01.154 Fused Compare & Write: Supported 00:17:01.154 Scatter-Gather List 00:17:01.154 SGL Command Set: Supported (Dword aligned) 00:17:01.154 SGL Keyed: Not Supported 00:17:01.154 SGL Bit Bucket Descriptor: Not Supported 00:17:01.154 SGL Metadata Pointer: Not Supported 00:17:01.154 Oversized SGL: Not Supported 00:17:01.154 SGL Metadata Address: Not Supported 00:17:01.154 SGL Offset: Not Supported 00:17:01.154 Transport SGL Data Block: Not Supported 00:17:01.154 Replay Protected Memory Block: Not Supported 00:17:01.154 00:17:01.154 Firmware Slot Information 00:17:01.154 ========================= 00:17:01.154 Active slot: 1 00:17:01.154 Slot 1 Firmware Revision: 24.09 00:17:01.154 00:17:01.154 00:17:01.154 Commands Supported and Effects 00:17:01.154 ============================== 00:17:01.154 Admin Commands 00:17:01.154 -------------- 00:17:01.154 Get Log Page (02h): Supported 
00:17:01.154 Identify (06h): Supported 00:17:01.154 Abort (08h): Supported 00:17:01.154 Set Features (09h): Supported 00:17:01.154 Get Features (0Ah): Supported 00:17:01.154 Asynchronous Event Request (0Ch): Supported 00:17:01.154 Keep Alive (18h): Supported 00:17:01.154 I/O Commands 00:17:01.154 ------------ 00:17:01.154 Flush (00h): Supported LBA-Change 00:17:01.154 Write (01h): Supported LBA-Change 00:17:01.154 Read (02h): Supported 00:17:01.154 Compare (05h): Supported 00:17:01.154 Write Zeroes (08h): Supported LBA-Change 00:17:01.154 Dataset Management (09h): Supported LBA-Change 00:17:01.154 Copy (19h): Supported LBA-Change 00:17:01.154 00:17:01.154 Error Log 00:17:01.154 ========= 00:17:01.154 00:17:01.154 Arbitration 00:17:01.154 =========== 00:17:01.154 Arbitration Burst: 1 00:17:01.154 00:17:01.154 Power Management 00:17:01.154 ================ 00:17:01.154 Number of Power States: 1 00:17:01.154 Current Power State: Power State #0 00:17:01.154 Power State #0: 00:17:01.154 Max Power: 0.00 W 00:17:01.154 Non-Operational State: Operational 00:17:01.154 Entry Latency: Not Reported 00:17:01.154 Exit Latency: Not Reported 00:17:01.154 Relative Read Throughput: 0 00:17:01.154 Relative Read Latency: 0 00:17:01.154 Relative Write Throughput: 0 00:17:01.154 Relative Write Latency: 0 00:17:01.154 Idle Power: Not Reported 00:17:01.154 Active Power: Not Reported 00:17:01.154 Non-Operational Permissive Mode: Not Supported 00:17:01.154 00:17:01.154 Health Information 00:17:01.154 ================== 00:17:01.154 Critical Warnings: 00:17:01.154 Available Spare Space: OK 00:17:01.154 Temperature: OK 00:17:01.154 Device Reliability: OK 00:17:01.154 Read Only: No 00:17:01.154 Volatile Memory Backup: OK 00:17:01.154 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:01.154 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:01.154 Available Spare: 0% 00:17:01.154 Available Spare Threshold: 0% 00:17:01.154 Life Percentage Used: 0% 00:17:01.154 Data Units Read: 0 00:17:01.154 Data Units Written: 0 00:17:01.154 Host Read Commands: 0 00:17:01.154 Host Write Commands: 0 00:17:01.154 Controller Busy Time: 0 minutes 00:17:01.154 Power Cycles: 0 00:17:01.154 Power On Hours: 0 hours 00:17:01.154 Unsafe Shutdowns: 0 00:17:01.154 Unrecoverable Media Errors: 0 00:17:01.154 Lifetime Error Log Entries: 0 00:17:01.154 Warning Temperature Time: 0 minutes 00:17:01.154 Critical Temperature Time: 0 minutes 00:17:01.154 00:17:01.154 Number of Queues 00:17:01.154 ================ 00:17:01.154 Number of I/O Submission Queues: 127 00:17:01.154 Number of I/O Completion Queues: 127 00:17:01.154 00:17:01.154 Active Namespaces 00:17:01.154 ================= 00:17:01.154 Namespace ID:1 00:17:01.154 Error Recovery Timeout: Unlimited 00:17:01.155 Command Set Identifier: NVM (00h) 00:17:01.155 Deallocate: Supported 00:17:01.155 Deallocated/Unwritten Error: Not Supported 00:17:01.155 Deallocated Read Value: Unknown 00:17:01.155 Deallocate in Write Zeroes: Not Supported 00:17:01.155 Deallocated Guard Field: 0xFFFF 00:17:01.155 Flush: Supported 00:17:01.155 Reservation: Supported 00:17:01.155 Namespace Sharing Capabilities: Multiple Controllers 00:17:01.155 Size (in LBAs): 131072 (0GiB) 00:17:01.155 Capacity (in LBAs): 131072 (0GiB) 00:17:01.155 Utilization (in LBAs): 131072 (0GiB) 00:17:01.155 NGUID: 7D5F264FC0024BE9A69DB075BEC88A6D 00:17:01.155 UUID: 7d5f264f-c002-4be9-a69d-b075bec88a6d 00:17:01.155 Thin Provisioning: Not Supported 00:17:01.155 Per-NS Atomic Units: Yes 00:17:01.155 Atomic Boundary Size (Normal): 0 00:17:01.155 Atomic Boundary Size (PFail): 0 00:17:01.155 Atomic Boundary Offset: 0 00:17:01.155 Maximum Single Source Range Length: 65535 00:17:01.155 Maximum Copy Length: 65535 00:17:01.155 Maximum Source Range Count: 1 00:17:01.155 NGUID/EUI64 Never Reused: No 00:17:01.155 Namespace Write Protected: No 00:17:01.155 Number of LBA Formats: 1 00:17:01.155 Current LBA Format: LBA Format #00 00:17:01.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:01.155 00:17:01.155
[2024-07-26 14:11:17.971655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:01.154 [2024-07-26 14:11:17.979442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:01.154 [2024-07-26 14:11:17.979502] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:01.154 [2024-07-26 14:11:17.979522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.154 [2024-07-26 14:11:17.979535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.154 [2024-07-26 14:11:17.979545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.154 [2024-07-26 14:11:17.979556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.154 [2024-07-26 14:11:17.979647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:01.154 [2024-07-26 14:11:17.979672] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:01.154 [2024-07-26 14:11:17.980654] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:01.154 [2024-07-26 14:11:17.980734] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:01.154 [2024-07-26 14:11:17.980752] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:01.154 [2024-07-26 14:11:17.981659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:01.154 [2024-07-26 14:11:17.981687] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:01.154 [2024-07-26 14:11:17.981747] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:01.154 [2024-07-26 14:11:17.983093] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:01.154
14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:01.413 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.413 [2024-07-26 
14:11:18.281717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:06.675 Initializing NVMe Controllers 00:17:06.675 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:06.675 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:06.675 Initialization complete. Launching workers. 00:17:06.675 ======================================================== 00:17:06.675 Latency(us) 00:17:06.675 Device Information : IOPS MiB/s Average min max 00:17:06.675 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 27419.61 107.11 4667.42 1343.17 7602.65 00:17:06.675 ======================================================== 00:17:06.675 Total : 27419.61 107.11 4667.42 1343.17 7602.65 00:17:06.676 00:17:06.676 [2024-07-26 14:11:23.385803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:06.676 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:06.676 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.934 [2024-07-26 14:11:23.646608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:12.198 Initializing NVMe Controllers 00:17:12.198 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:12.198 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:12.198 Initialization complete. Launching workers. 
00:17:12.198 ======================================================== 00:17:12.198 Latency(us) 00:17:12.198 Device Information : IOPS MiB/s Average min max 00:17:12.198 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 25849.76 100.98 4950.80 1376.64 8515.55 00:17:12.198 ======================================================== 00:17:12.198 Total : 25849.76 100.98 4950.80 1376.64 8515.55 00:17:12.198 00:17:12.198 [2024-07-26 14:11:28.669041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:12.198 14:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:12.198 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.198 [2024-07-26 14:11:28.954343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:17.481 [2024-07-26 14:11:34.095564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:17.481 Initializing NVMe Controllers 00:17:17.481 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:17.481 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:17.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:17.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:17.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:17.481 Initialization complete. Launching workers. 00:17:17.481 Starting thread on core 2 00:17:17.481 Starting thread on core 3 00:17:17.481 Starting thread on core 1 00:17:17.481 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:17.481 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.739 [2024-07-26 14:11:34.512915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:21.025 [2024-07-26 14:11:37.574313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:21.025 Initializing NVMe Controllers 00:17:21.025 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.025 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:21.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:21.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:21.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:21.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:21.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:21.025 Initialization complete. Launching workers. 
00:17:21.025 Starting thread on core 1 with urgent priority queue 00:17:21.025 Starting thread on core 2 with urgent priority queue 00:17:21.025 Starting thread on core 3 with urgent priority queue 00:17:21.025 Starting thread on core 0 with urgent priority queue 00:17:21.025 SPDK bdev Controller (SPDK2 ) core 0: 4242.00 IO/s 23.57 secs/100000 ios 00:17:21.025 SPDK bdev Controller (SPDK2 ) core 1: 4846.33 IO/s 20.63 secs/100000 ios 00:17:21.025 SPDK bdev Controller (SPDK2 ) core 2: 4285.00 IO/s 23.34 secs/100000 ios 00:17:21.025 SPDK bdev Controller (SPDK2 ) core 3: 4757.00 IO/s 21.02 secs/100000 ios 00:17:21.025 ======================================================== 00:17:21.025 00:17:21.025 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:21.025 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.025 [2024-07-26 14:11:37.899178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:21.025 Initializing NVMe Controllers 00:17:21.025 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.025 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.025 Namespace ID: 1 size: 0GB 00:17:21.025 Initialization complete. 00:17:21.025 INFO: using host memory buffer for IO 00:17:21.025 Hello world! 00:17:21.025 [2024-07-26 14:11:37.908399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:21.283 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:21.283 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.541 [2024-07-26 14:11:38.225826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:22.475 Initializing NVMe Controllers 00:17:22.475 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:22.475 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:22.476 Initialization complete. Launching workers. 
00:17:22.476 submit (in ns) avg, min, max = 10257.0, 4186.7, 4005293.3 00:17:22.476 complete (in ns) avg, min, max = 28993.1, 2445.9, 4004165.9 00:17:22.476 00:17:22.476 Submit histogram 00:17:22.476 ================ 00:17:22.476 Range in us Cumulative Count 00:17:22.476 4.172 - 4.196: 0.0333% ( 4) 00:17:22.476 4.196 - 4.219: 0.2418% ( 25) 00:17:22.476 4.219 - 4.243: 1.1588% ( 110) 00:17:22.476 4.243 - 4.267: 3.5848% ( 291) 00:17:22.476 4.267 - 4.290: 7.5531% ( 476) 00:17:22.476 4.290 - 4.314: 14.2976% ( 809) 00:17:22.476 4.314 - 4.338: 22.2093% ( 949) 00:17:22.476 4.338 - 4.361: 30.2209% ( 961) 00:17:22.476 4.361 - 4.385: 37.5240% ( 876) 00:17:22.476 4.385 - 4.409: 42.9762% ( 654) 00:17:22.476 4.409 - 4.433: 47.3197% ( 521) 00:17:22.476 4.433 - 4.456: 49.7957% ( 297) 00:17:22.476 4.456 - 4.480: 51.5131% ( 206) 00:17:22.476 4.480 - 4.504: 54.2143% ( 324) 00:17:22.476 4.504 - 4.527: 57.7824% ( 428) 00:17:22.476 4.527 - 4.551: 62.3593% ( 549) 00:17:22.476 4.551 - 4.575: 66.9779% ( 554) 00:17:22.476 4.575 - 4.599: 70.5044% ( 423) 00:17:22.476 4.599 - 4.622: 72.9554% ( 294) 00:17:22.476 4.622 - 4.646: 74.1809% ( 147) 00:17:22.476 4.646 - 4.670: 75.1647% ( 118) 00:17:22.476 4.670 - 4.693: 75.5398% ( 45) 00:17:22.476 4.693 - 4.717: 75.9316% ( 47) 00:17:22.476 4.717 - 4.741: 76.1901% ( 31) 00:17:22.476 4.741 - 4.764: 76.5486% ( 43) 00:17:22.476 4.764 - 4.788: 76.8153% ( 32) 00:17:22.476 4.788 - 4.812: 76.9154% ( 12) 00:17:22.476 4.812 - 4.836: 76.9571% ( 5) 00:17:22.476 4.836 - 4.859: 77.0154% ( 7) 00:17:22.476 4.859 - 4.883: 77.0738% ( 7) 00:17:22.476 4.883 - 4.907: 77.9575% ( 106) 00:17:22.476 4.907 - 4.930: 81.3339% ( 405) 00:17:22.476 4.930 - 4.954: 87.3364% ( 720) 00:17:22.476 4.954 - 4.978: 95.0396% ( 924) 00:17:22.476 4.978 - 5.001: 96.8904% ( 222) 00:17:22.476 5.001 - 5.025: 97.1655% ( 33) 00:17:22.476 5.025 - 5.049: 97.2655% ( 12) 00:17:22.476 5.049 - 5.073: 97.3322% ( 8) 00:17:22.476 5.073 - 5.096: 97.3906% ( 7) 00:17:22.476 5.096 - 5.120: 97.4323% ( 5) 00:17:22.476 5.120 - 5.144: 97.5073% ( 9) 00:17:22.476 5.144 - 5.167: 97.5990% ( 11) 00:17:22.476 5.167 - 5.191: 97.7324% ( 16) 00:17:22.476 5.191 - 5.215: 97.9158% ( 22) 00:17:22.476 5.215 - 5.239: 98.0158% ( 12) 00:17:22.476 5.239 - 5.262: 98.1326% ( 14) 00:17:22.476 5.262 - 5.286: 98.1992% ( 8) 00:17:22.476 5.286 - 5.310: 98.2493% ( 6) 00:17:22.476 5.310 - 5.333: 98.3076% ( 7) 00:17:22.476 5.333 - 5.357: 98.3493% ( 5) 00:17:22.476 5.357 - 5.381: 98.4410% ( 11) 00:17:22.476 5.381 - 5.404: 98.5077% ( 8) 00:17:22.476 5.404 - 5.428: 98.5494% ( 5) 00:17:22.476 5.428 - 5.452: 98.5661% ( 2) 00:17:22.476 5.452 - 5.476: 98.6161% ( 6) 00:17:22.476 5.476 - 5.499: 98.6661% ( 6) 00:17:22.476 5.499 - 5.523: 98.7328% ( 8) 00:17:22.476 5.523 - 5.547: 98.8162% ( 10) 00:17:22.476 5.547 - 5.570: 98.8328% ( 2) 00:17:22.476 5.570 - 5.594: 98.8495% ( 2) 00:17:22.476 5.594 - 5.618: 98.8829% ( 4) 00:17:22.476 5.618 - 5.641: 98.8995% ( 2) 00:17:22.476 5.641 - 5.665: 98.9329% ( 4) 00:17:22.476 5.665 - 5.689: 98.9579% ( 3) 00:17:22.476 5.713 - 5.736: 98.9662% ( 1) 00:17:22.476 5.736 - 5.760: 98.9746% ( 1) 00:17:22.476 5.784 - 5.807: 98.9829% ( 1) 00:17:22.476 5.807 - 5.831: 98.9912% ( 1) 00:17:22.476 5.831 - 5.855: 98.9996% ( 1) 00:17:22.476 5.879 - 5.902: 99.0079% ( 1) 00:17:22.476 5.950 - 5.973: 99.0413% ( 4) 00:17:22.476 5.973 - 5.997: 99.0496% ( 1) 00:17:22.476 5.997 - 6.021: 99.0579% ( 1) 00:17:22.476 6.021 - 6.044: 99.0663% ( 1) 00:17:22.476 6.044 - 6.068: 99.0746% ( 1) 00:17:22.476 6.068 - 6.116: 99.0913% ( 2) 00:17:22.476 6.163 - 6.210: 99.0996% ( 1) 
00:17:22.476 6.210 - 6.258: 99.1080% ( 1) 00:17:22.476 6.258 - 6.305: 99.1163% ( 1) 00:17:22.476 6.353 - 6.400: 99.1330% ( 2) 00:17:22.476 6.400 - 6.447: 99.1413% ( 1) 00:17:22.476 6.447 - 6.495: 99.1496% ( 1) 00:17:22.476 6.827 - 6.874: 99.1580% ( 1) 00:17:22.476 6.874 - 6.921: 99.1663% ( 1) 00:17:22.476 7.064 - 7.111: 99.1747% ( 1) 00:17:22.476 7.111 - 7.159: 99.1830% ( 1) 00:17:22.476 7.301 - 7.348: 99.1913% ( 1) 00:17:22.476 7.538 - 7.585: 99.1997% ( 1) 00:17:22.476 7.633 - 7.680: 99.2163% ( 2) 00:17:22.476 7.727 - 7.775: 99.2247% ( 1) 00:17:22.476 7.775 - 7.822: 99.2330% ( 1) 00:17:22.476 7.822 - 7.870: 99.2414% ( 1) 00:17:22.476 8.107 - 8.154: 99.2497% ( 1) 00:17:22.476 8.154 - 8.201: 99.2830% ( 4) 00:17:22.476 8.201 - 8.249: 99.2997% ( 2) 00:17:22.476 8.249 - 8.296: 99.3080% ( 1) 00:17:22.476 8.344 - 8.391: 99.3164% ( 1) 00:17:22.476 8.439 - 8.486: 99.3247% ( 1) 00:17:22.476 8.533 - 8.581: 99.3331% ( 1) 00:17:22.476 8.723 - 8.770: 99.3581% ( 3) 00:17:22.476 8.865 - 8.913: 99.3664% ( 1) 00:17:22.476 9.055 - 9.102: 99.3747% ( 1) 00:17:22.476 9.150 - 9.197: 99.3831% ( 1) 00:17:22.476 9.244 - 9.292: 99.4164% ( 4) 00:17:22.476 9.339 - 9.387: 99.4248% ( 1) 00:17:22.476 9.434 - 9.481: 99.4331% ( 1) 00:17:22.476 9.576 - 9.624: 99.4498% ( 2) 00:17:22.476 9.624 - 9.671: 99.4581% ( 1) 00:17:22.476 9.671 - 9.719: 99.4664% ( 1) 00:17:22.476 9.719 - 9.766: 99.4748% ( 1) 00:17:22.476 9.813 - 9.861: 99.4831% ( 1) 00:17:22.476 9.908 - 9.956: 99.4915% ( 1) 00:17:22.476 9.956 - 10.003: 99.4998% ( 1) 00:17:22.476 10.003 - 10.050: 99.5081% ( 1) 00:17:22.476 10.050 - 10.098: 99.5165% ( 1) 00:17:22.476 10.098 - 10.145: 99.5248% ( 1) 00:17:22.476 10.335 - 10.382: 99.5331% ( 1) 00:17:22.476 10.430 - 10.477: 99.5415% ( 1) 00:17:22.476 10.477 - 10.524: 99.5498% ( 1) 00:17:22.476 10.524 - 10.572: 99.5665% ( 2) 00:17:22.476 10.619 - 10.667: 99.5748% ( 1) 00:17:22.476 10.714 - 10.761: 99.5832% ( 1) 00:17:22.476 10.761 - 10.809: 99.5915% ( 1) 00:17:22.476 10.809 - 10.856: 99.5998% ( 1) 00:17:22.476 10.999 - 11.046: 99.6082% ( 1) 00:17:22.476 11.283 - 11.330: 99.6248% ( 2) 00:17:22.476 11.330 - 11.378: 99.6415% ( 2) 00:17:22.476 11.378 - 11.425: 99.6499% ( 1) 00:17:22.476 11.520 - 11.567: 99.6582% ( 1) 00:17:22.476 11.662 - 11.710: 99.6749% ( 2) 00:17:22.476 11.804 - 11.852: 99.6832% ( 1) 00:17:22.476 11.852 - 11.899: 99.6915% ( 1) 00:17:22.476 12.231 - 12.326: 99.6999% ( 1) 00:17:22.476 12.326 - 12.421: 99.7082% ( 1) 00:17:22.476 12.705 - 12.800: 99.7165% ( 1) 00:17:22.476 12.990 - 13.084: 99.7332% ( 2) 00:17:22.476 13.084 - 13.179: 99.7416% ( 1) 00:17:22.476 13.274 - 13.369: 99.7499% ( 1) 00:17:22.476 13.559 - 13.653: 99.7582% ( 1) 00:17:22.476 13.748 - 13.843: 99.7749% ( 2) 00:17:22.476 14.033 - 14.127: 99.7832% ( 1) 00:17:22.476 14.127 - 14.222: 99.7916% ( 1) 00:17:22.476 14.222 - 14.317: 99.8083% ( 2) 00:17:22.476 14.507 - 14.601: 99.8166% ( 1) 00:17:22.477 14.601 - 14.696: 99.8249% ( 1) 00:17:22.477 14.981 - 15.076: 99.8333% ( 1) 00:17:22.477 15.170 - 15.265: 99.8416% ( 1) 00:17:22.477 15.455 - 15.550: 99.8499% ( 1) 00:17:22.477 15.834 - 15.929: 99.8583% ( 1) 00:17:22.477 3980.705 - 4004.978: 99.9917% ( 16) 00:17:22.477 4004.978 - 4029.250: 100.0000% ( 1) 00:17:22.477 00:17:22.477 Complete histogram 00:17:22.477 ================== 00:17:22.477 Range in us Cumulative Count 00:17:22.477 2.441 - 2.453: 0.4585% ( 55) 00:17:22.477 2.453 - 2.465: 22.3426% ( 2625) 00:17:22.477 2.465 - 2.477: 48.1201% ( 3092) 00:17:22.477 2.477 - 2.489: 50.7295% ( 313) 00:17:22.477 2.489 - 2.501: 64.1434% ( 1609) 00:17:22.477 2.501 
- 2.513: 88.5952% ( 2933) 00:17:22.477 2.513 - 2.524: 93.8808% ( 634) 00:17:22.477 2.524 - 2.536: 96.0484% ( 260) 00:17:22.477 2.536 - 2.548: 97.5740% ( 183) 00:17:22.477 2.548 - 2.560: 98.1659% ( 71) 00:17:22.477 2.560 - 2.572: 98.3993% ( 28) 00:17:22.477 2.572 - 2.584: 98.4910% ( 11) 00:17:22.477 2.584 - 2.596: 98.5827% ( 11) 00:17:22.477 2.596 - 2.607: 98.6494% ( 8) 00:17:22.734 [2024-07-26 14:11:39.329542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:22.734 2.607 - 2.619: 98.6578% ( 1) 00:17:22.734 2.619 - 2.631: 98.6744% ( 2) 00:17:22.734 2.631 - 2.643: 98.6995% ( 3) 00:17:22.734 2.643 - 2.655: 98.7078% ( 1) 00:17:22.734 2.655 - 2.667: 98.7161% ( 1) 00:17:22.734 2.702 - 2.714: 98.7411% ( 3) 00:17:22.734 2.714 - 2.726: 98.7495% ( 1) 00:17:22.734 2.726 - 2.738: 98.7662% ( 2) 00:17:22.734 2.738 - 2.750: 98.7828% ( 2) 00:17:22.734 2.750 - 2.761: 98.8078% ( 3) 00:17:22.734 2.761 - 2.773: 98.8245% ( 2) 00:17:22.734 2.773 - 2.785: 98.8328% ( 1) 00:17:22.734 2.785 - 2.797: 98.8412% ( 1) 00:17:22.734 2.809 - 2.821: 98.8579% ( 2) 00:17:22.734 3.295 - 3.319: 98.8662% ( 1) 00:17:22.735 3.319 - 3.342: 98.8745% ( 1) 00:17:22.735 3.342 - 3.366: 98.8829% ( 1) 00:17:22.735 3.366 - 3.390: 98.9079% ( 3) 00:17:22.735 3.390 - 3.413: 98.9162% ( 1) 00:17:22.735 3.413 - 3.437: 98.9746% ( 7) 00:17:22.735 3.437 - 3.461: 98.9996% ( 3) 00:17:22.735 3.461 - 3.484: 99.0079% ( 1) 00:17:22.735 3.484 - 3.508: 99.0163% ( 1) 00:17:22.735 3.508 - 3.532: 99.0329% ( 2) 00:17:22.735 3.532 - 3.556: 99.0579% ( 3) 00:17:22.735 3.556 - 3.579: 99.0746% ( 2) 00:17:22.735 3.579 - 3.603: 99.0830% ( 1) 00:17:22.735 3.603 - 3.627: 99.0913% ( 1) 00:17:22.735 3.650 - 3.674: 99.1080% ( 2) 00:17:22.735 3.721 - 3.745: 99.1163% ( 1) 00:17:22.735 5.547 - 5.570: 99.1246% ( 1) 00:17:22.735 5.855 - 5.879: 99.1330% ( 1) 00:17:22.735 5.879 - 5.902: 99.1413% ( 1) 00:17:22.735 6.163 - 6.210: 99.1496% ( 1) 00:17:22.735 6.210 - 6.258: 99.1580% ( 1) 00:17:22.735 6.305 - 6.353: 99.1747% ( 2) 00:17:22.735 6.400 - 6.447: 99.1830% ( 1) 00:17:22.735 6.542 - 6.590: 99.1913% ( 1) 00:17:22.735 6.732 - 6.779: 99.1997% ( 1) 00:17:22.735 6.969 - 7.016: 99.2080% ( 1) 00:17:22.735 7.016 - 7.064: 99.2163% ( 1) 00:17:22.735 7.111 - 7.159: 99.2247% ( 1) 00:17:22.735 7.206 - 7.253: 99.2330% ( 1) 00:17:22.735 7.680 - 7.727: 99.2414% ( 1) 00:17:22.735 7.727 - 7.775: 99.2497% ( 1) 00:17:22.735 7.964 - 8.012: 99.2580% ( 1) 00:17:22.735 8.154 - 8.201: 99.2664% ( 1) 00:17:22.735 8.249 - 8.296: 99.2747% ( 1) 00:17:22.735 8.486 - 8.533: 99.2830% ( 1) 00:17:22.735 8.533 - 8.581: 99.2914% ( 1) 00:17:22.735 8.865 - 8.913: 99.2997% ( 1) 00:17:22.735 10.098 - 10.145: 99.3164% ( 2) 00:17:22.735 10.430 - 10.477: 99.3247% ( 1) 00:17:22.735 15.455 - 15.550: 99.3331% ( 1) 00:17:22.735 2390.850 - 2402.987: 99.3414% ( 1) 00:17:22.735 3519.526 - 3543.799: 99.3497% ( 1) 00:17:22.735 3980.705 - 4004.978: 100.0000% ( 78) 00:17:22.735 00:17:22.735 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:22.735 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:22.735 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:22.735 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 
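The nvmf_get_subsystems RPC invoked next dumps the target's whole subsystem tree as JSON; the same dump is printed again further down after Malloc4 is attached as NSID 2. As a minimal sketch of inspecting that dump out-of-band, assuming jq is available on the build host (the harness itself does not use jq):
    # Print each NVMe subsystem NQN with the nsid/uuid of every namespace it exposes.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq -r '.[] | select(.subtype == "NVMe") | .nqn as $nqn | .namespaces[] | "\($nqn) nsid=\(.nsid) uuid=\(.uuid)"'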
00:17:22.735 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:22.994 [ 00:17:22.994 { 00:17:22.994 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:22.994 "subtype": "Discovery", 00:17:22.994 "listen_addresses": [], 00:17:22.994 "allow_any_host": true, 00:17:22.994 "hosts": [] 00:17:22.994 }, 00:17:22.994 { 00:17:22.994 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:22.994 "subtype": "NVMe", 00:17:22.994 "listen_addresses": [ 00:17:22.994 { 00:17:22.994 "trtype": "VFIOUSER", 00:17:22.994 "adrfam": "IPv4", 00:17:22.994 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:22.994 "trsvcid": "0" 00:17:22.994 } 00:17:22.994 ], 00:17:22.994 "allow_any_host": true, 00:17:22.994 "hosts": [], 00:17:22.994 "serial_number": "SPDK1", 00:17:22.994 "model_number": "SPDK bdev Controller", 00:17:22.994 "max_namespaces": 32, 00:17:22.994 "min_cntlid": 1, 00:17:22.994 "max_cntlid": 65519, 00:17:22.994 "namespaces": [ 00:17:22.994 { 00:17:22.994 "nsid": 1, 00:17:22.994 "bdev_name": "Malloc1", 00:17:22.994 "name": "Malloc1", 00:17:22.994 "nguid": "01F89151B43346D794FAC7CF8AA31F11", 00:17:22.994 "uuid": "01f89151-b433-46d7-94fa-c7cf8aa31f11" 00:17:22.994 }, 00:17:22.994 { 00:17:22.994 "nsid": 2, 00:17:22.994 "bdev_name": "Malloc3", 00:17:22.994 "name": "Malloc3", 00:17:22.994 "nguid": "3BA16FD414D24246AD32350EDF0AEAB4", 00:17:22.994 "uuid": "3ba16fd4-14d2-4246-ad32-350edf0aeab4" 00:17:22.994 } 00:17:22.994 ] 00:17:22.994 }, 00:17:22.994 { 00:17:22.994 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:22.994 "subtype": "NVMe", 00:17:22.994 "listen_addresses": [ 00:17:22.994 { 00:17:22.994 "trtype": "VFIOUSER", 00:17:22.994 "adrfam": "IPv4", 00:17:22.994 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:22.994 "trsvcid": "0" 00:17:22.994 } 00:17:22.994 ], 00:17:22.994 "allow_any_host": true, 00:17:22.994 "hosts": [], 00:17:22.994 "serial_number": "SPDK2", 00:17:22.994 "model_number": "SPDK bdev Controller", 00:17:22.994 "max_namespaces": 32, 00:17:22.994 "min_cntlid": 1, 00:17:22.994 "max_cntlid": 65519, 00:17:22.994 "namespaces": [ 00:17:22.994 { 00:17:22.994 "nsid": 1, 00:17:22.994 "bdev_name": "Malloc2", 00:17:22.994 "name": "Malloc2", 00:17:22.994 "nguid": "7D5F264FC0024BE9A69DB075BEC88A6D", 00:17:22.994 "uuid": "7d5f264f-c002-4be9-a69d-b075bec88a6d" 00:17:22.994 } 00:17:22.994 ] 00:17:22.994 } 00:17:22.994 ] 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2509310 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:22.994 14:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:22.994 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.281 [2024-07-26 14:11:39.901954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:23.281 Malloc4 00:17:23.281 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:23.850 [2024-07-26 14:11:40.658983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:23.850 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:23.850 Asynchronous Event Request test 00:17:23.850 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:23.850 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:23.850 Registering asynchronous event callbacks... 00:17:23.850 Starting namespace attribute notice tests for all controllers... 00:17:23.850 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:23.850 aer_cb - Changed Namespace 00:17:23.850 Cleaning up... 00:17:24.416 [ 00:17:24.416 { 00:17:24.416 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:24.416 "subtype": "Discovery", 00:17:24.416 "listen_addresses": [], 00:17:24.416 "allow_any_host": true, 00:17:24.416 "hosts": [] 00:17:24.416 }, 00:17:24.416 { 00:17:24.416 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:24.416 "subtype": "NVMe", 00:17:24.416 "listen_addresses": [ 00:17:24.416 { 00:17:24.416 "trtype": "VFIOUSER", 00:17:24.416 "adrfam": "IPv4", 00:17:24.416 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:24.416 "trsvcid": "0" 00:17:24.416 } 00:17:24.416 ], 00:17:24.416 "allow_any_host": true, 00:17:24.416 "hosts": [], 00:17:24.416 "serial_number": "SPDK1", 00:17:24.416 "model_number": "SPDK bdev Controller", 00:17:24.416 "max_namespaces": 32, 00:17:24.416 "min_cntlid": 1, 00:17:24.416 "max_cntlid": 65519, 00:17:24.416 "namespaces": [ 00:17:24.416 { 00:17:24.416 "nsid": 1, 00:17:24.416 "bdev_name": "Malloc1", 00:17:24.416 "name": "Malloc1", 00:17:24.416 "nguid": "01F89151B43346D794FAC7CF8AA31F11", 00:17:24.416 "uuid": "01f89151-b433-46d7-94fa-c7cf8aa31f11" 00:17:24.416 }, 00:17:24.416 { 00:17:24.417 "nsid": 2, 00:17:24.417 "bdev_name": "Malloc3", 00:17:24.417 "name": "Malloc3", 00:17:24.417 "nguid": "3BA16FD414D24246AD32350EDF0AEAB4", 00:17:24.417 "uuid": "3ba16fd4-14d2-4246-ad32-350edf0aeab4" 00:17:24.417 } 00:17:24.417 ] 00:17:24.417 }, 00:17:24.417 { 00:17:24.417 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:24.417 "subtype": "NVMe", 00:17:24.417 "listen_addresses": [ 00:17:24.417 { 00:17:24.417 "trtype": "VFIOUSER", 00:17:24.417 "adrfam": "IPv4", 00:17:24.417 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:24.417 "trsvcid": "0" 00:17:24.417 } 00:17:24.417 ], 00:17:24.417 "allow_any_host": true, 00:17:24.417 "hosts": [], 00:17:24.417 
"serial_number": "SPDK2", 00:17:24.417 "model_number": "SPDK bdev Controller", 00:17:24.417 "max_namespaces": 32, 00:17:24.417 "min_cntlid": 1, 00:17:24.417 "max_cntlid": 65519, 00:17:24.417 "namespaces": [ 00:17:24.417 { 00:17:24.417 "nsid": 1, 00:17:24.417 "bdev_name": "Malloc2", 00:17:24.417 "name": "Malloc2", 00:17:24.417 "nguid": "7D5F264FC0024BE9A69DB075BEC88A6D", 00:17:24.417 "uuid": "7d5f264f-c002-4be9-a69d-b075bec88a6d" 00:17:24.417 }, 00:17:24.417 { 00:17:24.417 "nsid": 2, 00:17:24.417 "bdev_name": "Malloc4", 00:17:24.417 "name": "Malloc4", 00:17:24.417 "nguid": "B6AF76A688294F688EE2172B2731E80E", 00:17:24.417 "uuid": "b6af76a6-8829-4f68-8ee2-172b2731e80e" 00:17:24.417 } 00:17:24.417 ] 00:17:24.417 } 00:17:24.417 ] 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2509310 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2503471 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2503471 ']' 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2503471 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2503471 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2503471' 00:17:24.417 killing process with pid 2503471 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2503471 00:17:24.417 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2503471 00:17:24.675 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:24.675 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:24.675 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:24.675 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:24.675 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:24.675 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2509459 00:17:24.675 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2509459' 00:17:24.676 Process pid: 2509459 00:17:24.676 14:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2509459 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2509459 ']' 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.676 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:24.676 [2024-07-26 14:11:41.492644] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:24.676 [2024-07-26 14:11:41.493862] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:17:24.676 [2024-07-26 14:11:41.493928] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.676 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.934 [2024-07-26 14:11:41.563489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.934 [2024-07-26 14:11:41.689410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.934 [2024-07-26 14:11:41.689485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.934 [2024-07-26 14:11:41.689502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.934 [2024-07-26 14:11:41.689516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.934 [2024-07-26 14:11:41.689528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.934 [2024-07-26 14:11:41.689603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.934 [2024-07-26 14:11:41.689658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.934 [2024-07-26 14:11:41.689710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.934 [2024-07-26 14:11:41.689713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.934 [2024-07-26 14:11:41.801268] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:24.934 [2024-07-26 14:11:41.801529] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:24.934 [2024-07-26 14:11:41.801806] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
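With the target now up in interrupt mode, the RPC sequence that follows (create the vfio-user transport, then one malloc-backed subsystem per socket directory) condenses to the sketch below. The command lines are taken verbatim from this run; the $SPDK shorthand and the grouping into one block are editorial, and the harness additionally waits for the RPC socket via its own waitforlisten helper:
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user2/2
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
    # A host can then attach exactly as the perf runs above did, e.g.:
    # $SPDK/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2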
00:17:24.934 [2024-07-26 14:11:41.802508] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:24.934 [2024-07-26 14:11:41.802774] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:25.192 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.192 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:25.192 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:26.123 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:26.382 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:26.382 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:26.382 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:26.382 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:26.382 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:26.641 Malloc1 00:17:26.641 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:27.208 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:27.774 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:28.031 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:28.031 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:28.031 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:28.289 Malloc2 00:17:28.290 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:28.855 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:29.113 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:17:29.678 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:29.678 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2509459 00:17:29.678 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2509459 ']' 00:17:29.678 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2509459 00:17:29.678 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:29.678 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.679 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2509459 00:17:29.679 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.679 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.679 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2509459' 00:17:29.679 killing process with pid 2509459 00:17:29.679 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2509459 00:17:29.679 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2509459 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:29.949 00:17:29.949 real 0m57.247s 00:17:29.949 user 3m46.410s 00:17:29.949 sys 0m5.494s 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:29.949 ************************************ 00:17:29.949 END TEST nvmf_vfio_user 00:17:29.949 ************************************ 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.949 ************************************ 00:17:29.949 START TEST nvmf_vfio_user_nvme_compliance 00:17:29.949 ************************************ 00:17:29.949 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:30.208 * Looking for test storage... 
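The pass that ends above repeats the vfio-user bring-up with the target in interrupt mode: nvmf_tgt is started with --interrupt-mode and the VFIOUSER transport is created with the extra '-M -I' arguments, after which the same RPC sequence as in the poll-mode pass builds two controllers. A minimal sketch of that sequence for one controller, with names and paths taken verbatim from the log (scripts/rpc.py talks to the running nvmf_tgt over the default /var/tmp/spdk.sock):

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

For a VFIOUSER listener, -a names the socket directory clients attach to rather than an IP address, which is why each controller gets its own /var/run/vfio-user/domain/... directory.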
00:17:30.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2510190 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2510190' 00:17:30.208 Process pid: 2510190 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2510190 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2510190 ']' 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.208 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:30.208 [2024-07-26 14:11:46.939885] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:17:30.209 [2024-07-26 14:11:46.939987] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.209 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.209 [2024-07-26 14:11:47.007404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.467 [2024-07-26 14:11:47.133819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.467 [2024-07-26 14:11:47.133885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.467 [2024-07-26 14:11:47.133901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.467 [2024-07-26 14:11:47.133915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.467 [2024-07-26 14:11:47.133927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.467 [2024-07-26 14:11:47.133987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.467 [2024-07-26 14:11:47.134042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.467 [2024-07-26 14:11:47.134045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.467 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.467 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:30.467 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.400 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.658 malloc0 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.658 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:31.658 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.658 00:17:31.658 00:17:31.659 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.659 http://cunit.sourceforge.net/ 00:17:31.659 00:17:31.659 00:17:31.659 Suite: nvme_compliance 00:17:31.659 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 14:11:48.512979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:31.659 [2024-07-26 14:11:48.514534] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:31.659 [2024-07-26 14:11:48.514564] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:31.659 [2024-07-26 14:11:48.514579] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:31.659 [2024-07-26 14:11:48.516015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:31.916 passed 00:17:31.916 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 14:11:48.611728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:31.916 [2024-07-26 14:11:48.614754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:31.916 passed 00:17:31.916 Test: admin_identify_ns ...[2024-07-26 14:11:48.712247] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:31.916 [2024-07-26 14:11:48.771452] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:31.916 [2024-07-26 14:11:48.779449] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:31.916 [2024-07-26 
14:11:48.800575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.174 passed 00:17:32.174 Test: admin_get_features_mandatory_features ...[2024-07-26 14:11:48.893588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.174 [2024-07-26 14:11:48.896615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.174 passed 00:17:32.174 Test: admin_get_features_optional_features ...[2024-07-26 14:11:48.990264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.174 [2024-07-26 14:11:48.993296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.174 passed 00:17:32.432 Test: admin_set_features_number_of_queues ...[2024-07-26 14:11:49.085207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.432 [2024-07-26 14:11:49.189553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.432 passed 00:17:32.432 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 14:11:49.279981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.432 [2024-07-26 14:11:49.283004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.689 passed 00:17:32.689 Test: admin_get_log_page_with_lpo ...[2024-07-26 14:11:49.377732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.689 [2024-07-26 14:11:49.445463] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:32.689 [2024-07-26 14:11:49.458529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.689 passed 00:17:32.689 Test: fabric_property_get ...[2024-07-26 14:11:49.550805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.689 [2024-07-26 14:11:49.552130] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:32.689 [2024-07-26 14:11:49.554838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.947 passed 00:17:32.947 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 14:11:49.648482] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.947 [2024-07-26 14:11:49.649815] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:32.947 [2024-07-26 14:11:49.651509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:32.947 passed 00:17:32.947 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 14:11:49.745213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:32.947 [2024-07-26 14:11:49.828442] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:33.205 [2024-07-26 14:11:49.844443] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:33.205 [2024-07-26 14:11:49.849566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.205 passed 00:17:33.205 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 14:11:49.941939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.205 [2024-07-26 14:11:49.943276] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:17:33.205 [2024-07-26 14:11:49.944961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.205 passed 00:17:33.205 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 14:11:50.037502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.463 [2024-07-26 14:11:50.115446] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:33.463 [2024-07-26 14:11:50.139443] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:33.463 [2024-07-26 14:11:50.144731] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.463 passed 00:17:33.463 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 14:11:50.236121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.463 [2024-07-26 14:11:50.237480] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:33.463 [2024-07-26 14:11:50.237538] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:33.463 [2024-07-26 14:11:50.239155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.463 passed 00:17:33.463 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 14:11:50.332882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.720 [2024-07-26 14:11:50.424443] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:33.721 [2024-07-26 14:11:50.432456] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:33.721 [2024-07-26 14:11:50.440448] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:33.721 [2024-07-26 14:11:50.448445] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:33.721 [2024-07-26 14:11:50.477548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.721 passed 00:17:33.721 Test: admin_create_io_sq_verify_pc ...[2024-07-26 14:11:50.569558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:33.721 [2024-07-26 14:11:50.586456] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:33.721 [2024-07-26 14:11:50.604101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:33.978 passed 00:17:33.978 Test: admin_create_io_qp_max_qps ...[2024-07-26 14:11:50.693733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:34.911 [2024-07-26 14:11:51.786446] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:35.476 [2024-07-26 14:11:52.167008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.476 passed 00:17:35.476 Test: admin_create_io_sq_shared_cq ...[2024-07-26 14:11:52.254784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:35.734 [2024-07-26 14:11:52.390443] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:35.734 [2024-07-26 14:11:52.427537] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:35.734 passed 00:17:35.734 00:17:35.734 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.734 
suites 1 1 n/a 0 0 00:17:35.734 tests 18 18 18 0 0 00:17:35.734 asserts 360 360 360 0 n/a 00:17:35.734 00:17:35.734 Elapsed time = 1.640 seconds 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2510190 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2510190 ']' 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2510190 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2510190 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2510190' 00:17:35.735 killing process with pid 2510190 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2510190 00:17:35.735 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2510190 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:35.995 00:17:35.995 real 0m6.038s 00:17:35.995 user 0m16.791s 00:17:35.995 sys 0m0.614s 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:35.995 ************************************ 00:17:35.995 END TEST nvmf_vfio_user_nvme_compliance 00:17:35.995 ************************************ 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.995 ************************************ 00:17:35.995 START TEST nvmf_vfio_user_fuzz 00:17:35.995 ************************************ 00:17:35.995 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:36.254 * Looking for test storage... 
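CUnit summary for the compliance suite above: one suite, 18 tests run, 18 passed, 360 asserts, no failures, in 1.64 seconds. The *ERROR* lines inside individual tests (e.g. 'I/O sqid:1 does not exist', 'non-PC CQ not supported') are negative-path responses the tests deliberately provoke, not failures; each test is bracketed by the 'enabling controller'/'disabling controller' notices. The suite can also be pointed at any vfio-user endpoint by hand; a sketch reusing the endpoint from this run:

  ./test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'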
00:17:36.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.254 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2510913 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2510913' 00:17:36.255 Process pid: 2510913 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2510913 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2510913 ']' 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.255 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:36.586 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.586 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:36.586 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:37.534 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:37.534 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.534 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.534 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.534 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:37.534 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.535 malloc0 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.535 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.793 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.793 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:37.793 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.793 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.793 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.793 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
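The transport ID assembled above ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is the generic SPDK trid form, with traddr again pointing at the listener's socket directory. Other trid-driven SPDK clients can address the same controller the same way; for instance (a sketch, assuming a default build tree where the examples land under build/examples):

  ./build/examples/identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'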
00:17:37.793 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:09.854 Fuzzing completed. Shutting down the fuzz application 00:18:09.854 00:18:09.854 Dumping successful admin opcodes: 00:18:09.854 8, 9, 10, 24, 00:18:09.854 Dumping successful io opcodes: 00:18:09.854 0, 00:18:09.854 NS: 0x200003a1ef00 I/O qp, Total commands completed: 573478, total successful commands: 2211, random_seed: 2492862400 00:18:09.854 NS: 0x200003a1ef00 admin qp, Total commands completed: 93614, total successful commands: 757, random_seed: 3779169024 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2510913 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2510913 ']' 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2510913 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2510913 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.854 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2510913' 00:18:09.855 killing process with pid 2510913 00:18:09.855 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2510913 00:18:09.855 14:12:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2510913 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:09.855 00:18:09.855 real 0m33.462s 00:18:09.855 user 0m33.295s 00:18:09.855 sys 0m26.878s 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:09.855 
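The fuzz pass above ran for the requested 30 seconds (-t 30; the log timestamps jump from 14:11:54 to 14:12:25 across the run) with a fixed seed (-S 123456), so the random command stream is reproducible; the fuzzer also pins itself to core 1 (-m 0x2), disjoint from the target's -m 0x1 core mask. Of the ~573k generated I/O commands and ~94k admin commands, only a small fraction complete successfully, which is the point of the test: nearly all randomly generated commands should be rejected cleanly, and the target must survive every one of them. The run can be reproduced, or varied, by invoking the fuzzer directly with the flags shown in the log:

  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

Changing -S yields a different command stream; the per-run artifacts (vfio_user_fuzz_log.txt and the target output) are deleted at the end of the test, as the rm -rf above shows.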
************************************ 00:18:09.855 END TEST nvmf_vfio_user_fuzz 00:18:09.855 ************************************ 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.855 ************************************ 00:18:09.855 START TEST nvmf_auth_target 00:18:09.855 ************************************ 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:09.855 * Looking for test storage... 00:18:09.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.855 14:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.855 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:12.392 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.393 14:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:12.393 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:12.393 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:12.393 Found net devices under 0000:84:00.0: cvl_0_0 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.393 14:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:12.393 Found net devices under 0000:84:00.1: cvl_0_1 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:12.393 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.652 14:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:12.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:18:12.652 00:18:12.652 --- 10.0.0.2 ping statistics --- 00:18:12.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.652 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:18:12.652 00:18:12.652 --- 10.0.0.1 ping statistics --- 00:18:12.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.652 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2517050 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2517050 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2517050 ']' 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.652 14:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.652 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2517140 00:18:12.911 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ef96e57d53a20a4c170ede3e847133bd5cce79968e501af3 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hLm 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ef96e57d53a20a4c170ede3e847133bd5cce79968e501af3 0 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ef96e57d53a20a4c170ede3e847133bd5cce79968e501af3 0 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ef96e57d53a20a4c170ede3e847133bd5cce79968e501af3 00:18:12.912 14:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:12.912 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hLm 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hLm 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.hLm 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fbd8a29109a79355d66a8caa90afaab76a70b2702edc89a395db3060a648661b 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jXq 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fbd8a29109a79355d66a8caa90afaab76a70b2702edc89a395db3060a648661b 3 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fbd8a29109a79355d66a8caa90afaab76a70b2702edc89a395db3060a648661b 3 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fbd8a29109a79355d66a8caa90afaab76a70b2702edc89a395db3060a648661b 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jXq 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jXq 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.jXq 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.172 14:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.172 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7c68c99c97a47b7a4252dc118df1bfb6 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.stj 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7c68c99c97a47b7a4252dc118df1bfb6 1 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7c68c99c97a47b7a4252dc118df1bfb6 1 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7c68c99c97a47b7a4252dc118df1bfb6 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.stj 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.stj 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.stj 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7ddcf052b881efa08b34ef0652314c7f172b770d875e7e57 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tlX 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7ddcf052b881efa08b34ef0652314c7f172b770d875e7e57 2 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
7ddcf052b881efa08b34ef0652314c7f172b770d875e7e57 2 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7ddcf052b881efa08b34ef0652314c7f172b770d875e7e57 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:13.173 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tlX 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tlX 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.tlX 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dd1538ef40b88085dfcafc2d05760bb07354b269fa40b275 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Zrq 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dd1538ef40b88085dfcafc2d05760bb07354b269fa40b275 2 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dd1538ef40b88085dfcafc2d05760bb07354b269fa40b275 2 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dd1538ef40b88085dfcafc2d05760bb07354b269fa40b275 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:13.173 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Zrq 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Zrq 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Zrq 00:18:13.433 14:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=62c03f21c43af9f5d8f6fd35e8c45603 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jLx 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 62c03f21c43af9f5d8f6fd35e8c45603 1 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 62c03f21c43af9f5d8f6fd35e8c45603 1 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=62c03f21c43af9f5d8f6fd35e8c45603 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jLx 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jLx 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.jLx 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1a7bccb6b94e4ddad8b875e307e965b424ea1f19d7c85bd45d745b7785d86f8c 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:13.433 
14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dOJ 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1a7bccb6b94e4ddad8b875e307e965b424ea1f19d7c85bd45d745b7785d86f8c 3 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1a7bccb6b94e4ddad8b875e307e965b424ea1f19d7c85bd45d745b7785d86f8c 3 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1a7bccb6b94e4ddad8b875e307e965b424ea1f19d7c85bd45d745b7785d86f8c 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dOJ 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dOJ 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.dOJ 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2517050 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2517050 ']' 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.433 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2517140 /var/tmp/host.sock 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2517140 ']' 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
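Each of the four key files registered below comes out of the same gen_dhchap_key recipe traced above: read len/2 random bytes as a hex string with xxd, wrap that string as a DHHC-1 secret, and store it mode 0600 under a mktemp path. A minimal sketch of that recipe, assuming the base64 payload is the ASCII hex secret with its little-endian CRC32 appended (the python step is not captured in the trace; the suffix is inferred from the DHHC-1 secrets that appear verbatim in the nvme connect commands later in this log):

    # sketch of gen_dhchap_key <digest-id> <len>: the len-char hex string is
    # the secret; the 4-byte CRC32 suffix before base64 is an assumption,
    # matching the DHHC-1:<id>:<base64>: strings used by nvme connect below
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:"+sys.argv[1]+":"+base64.b64encode(k+crc).decode()+":")' "$digest" "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    key0=$(gen_dhchap_key_sketch 00 48)   # e.g. /tmp/spdk.key-00.hLm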
00:18:13.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.999 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hLm 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hLm 00:18:14.258 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hLm 00:18:14.516 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.jXq ]] 00:18:14.516 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jXq 00:18:14.516 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.516 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.516 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.516 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jXq 00:18:14.516 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jXq 00:18:15.083 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:15.083 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.stj 00:18:15.083 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.083 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.083 14:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.083 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.stj 00:18:15.083 14:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.stj 00:18:15.340 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.tlX ]] 00:18:15.340 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tlX 00:18:15.340 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.340 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.340 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.340 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tlX 00:18:15.340 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tlX 00:18:15.597 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:15.597 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Zrq 00:18:15.597 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.597 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.597 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.597 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Zrq 00:18:15.597 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Zrq 00:18:16.161 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.jLx ]] 00:18:16.161 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jLx 00:18:16.161 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.161 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.161 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.161 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jLx 00:18:16.161 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jLx 00:18:16.419 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:16.419 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dOJ 00:18:16.419 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.419 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.419 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.419 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dOJ 00:18:16.419 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dOJ 00:18:16.676 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:16.676 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:16.676 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.676 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.676 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:16.676 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.241 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.844 00:18:17.844 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.844 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.844 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.844 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.844 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.844 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.844 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.101 { 00:18:18.101 "cntlid": 1, 00:18:18.101 "qid": 0, 00:18:18.101 "state": "enabled", 00:18:18.101 "thread": "nvmf_tgt_poll_group_000", 00:18:18.101 "listen_address": { 00:18:18.101 "trtype": "TCP", 00:18:18.101 "adrfam": "IPv4", 00:18:18.101 "traddr": "10.0.0.2", 00:18:18.101 "trsvcid": "4420" 00:18:18.101 }, 00:18:18.101 "peer_address": { 00:18:18.101 "trtype": "TCP", 00:18:18.101 "adrfam": "IPv4", 00:18:18.101 "traddr": "10.0.0.1", 00:18:18.101 "trsvcid": "45384" 00:18:18.101 }, 00:18:18.101 "auth": { 00:18:18.101 "state": "completed", 00:18:18.101 "digest": "sha256", 00:18:18.101 "dhgroup": "null" 00:18:18.101 } 00:18:18.101 } 00:18:18.101 ]' 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.101 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.666 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:19.598 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.163 14:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:21.096 00:18:21.096 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.096 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.096 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.354 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.354 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.354 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.354 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.354 { 00:18:21.354 "cntlid": 3, 00:18:21.354 "qid": 0, 00:18:21.354 "state": "enabled", 00:18:21.354 "thread": "nvmf_tgt_poll_group_000", 00:18:21.354 "listen_address": { 00:18:21.354 "trtype": "TCP", 00:18:21.354 "adrfam": "IPv4", 00:18:21.354 "traddr": "10.0.0.2", 00:18:21.354 "trsvcid": "4420" 00:18:21.354 }, 00:18:21.354 "peer_address": { 00:18:21.354 "trtype": "TCP", 00:18:21.354 "adrfam": "IPv4", 00:18:21.354 "traddr": "10.0.0.1", 00:18:21.354 "trsvcid": "45420" 00:18:21.354 }, 00:18:21.354 "auth": { 00:18:21.354 "state": "completed", 00:18:21.354 "digest": "sha256", 00:18:21.354 "dhgroup": "null" 00:18:21.354 } 00:18:21.354 } 00:18:21.354 ]' 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.354 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.919 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.293 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:23.293 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.293 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.858 00:18:23.858 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.858 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.858 14:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.424 { 00:18:24.424 "cntlid": 5, 00:18:24.424 "qid": 0, 00:18:24.425 "state": "enabled", 00:18:24.425 "thread": "nvmf_tgt_poll_group_000", 00:18:24.425 "listen_address": { 00:18:24.425 "trtype": "TCP", 00:18:24.425 "adrfam": "IPv4", 00:18:24.425 "traddr": "10.0.0.2", 00:18:24.425 "trsvcid": "4420" 00:18:24.425 }, 00:18:24.425 "peer_address": { 00:18:24.425 "trtype": "TCP", 00:18:24.425 "adrfam": "IPv4", 00:18:24.425 "traddr": "10.0.0.1", 00:18:24.425 "trsvcid": "51852" 00:18:24.425 }, 00:18:24.425 "auth": { 00:18:24.425 "state": "completed", 00:18:24.425 "digest": "sha256", 00:18:24.425 "dhgroup": "null" 00:18:24.425 } 00:18:24.425 } 00:18:24.425 ]' 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.683 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.057 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.989 00:18:26.989 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.989 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.989 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.555 { 00:18:27.555 "cntlid": 7, 00:18:27.555 "qid": 0, 00:18:27.555 "state": "enabled", 00:18:27.555 "thread": "nvmf_tgt_poll_group_000", 00:18:27.555 "listen_address": { 00:18:27.555 "trtype": "TCP", 00:18:27.555 "adrfam": "IPv4", 00:18:27.555 "traddr": "10.0.0.2", 00:18:27.555 "trsvcid": "4420" 00:18:27.555 }, 00:18:27.555 "peer_address": { 00:18:27.555 "trtype": "TCP", 00:18:27.555 "adrfam": "IPv4", 00:18:27.555 "traddr": "10.0.0.1", 00:18:27.555 "trsvcid": "51880" 00:18:27.555 }, 00:18:27.555 "auth": { 00:18:27.555 "state": "completed", 00:18:27.555 "digest": "sha256", 00:18:27.555 "dhgroup": "null" 00:18:27.555 } 00:18:27.555 } 00:18:27.555 ]' 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.555 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.813 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.186 14:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.186 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.186 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.752 00:18:29.752 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.752 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.752 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.010 14:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.010 { 00:18:30.010 "cntlid": 9, 00:18:30.010 "qid": 0, 00:18:30.010 "state": "enabled", 00:18:30.010 "thread": "nvmf_tgt_poll_group_000", 00:18:30.010 "listen_address": { 00:18:30.010 "trtype": "TCP", 00:18:30.010 "adrfam": "IPv4", 00:18:30.010 "traddr": "10.0.0.2", 00:18:30.010 "trsvcid": "4420" 00:18:30.010 }, 00:18:30.010 "peer_address": { 00:18:30.010 "trtype": "TCP", 00:18:30.010 "adrfam": "IPv4", 00:18:30.010 "traddr": "10.0.0.1", 00:18:30.010 "trsvcid": "51910" 00:18:30.010 }, 00:18:30.010 "auth": { 00:18:30.010 "state": "completed", 00:18:30.010 "digest": "sha256", 00:18:30.010 "dhgroup": "ffdhe2048" 00:18:30.010 } 00:18:30.010 } 00:18:30.010 ]' 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.010 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.575 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.509 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.768 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.339 00:18:32.339 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.339 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.339 14:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.631 { 00:18:32.631 "cntlid": 11, 00:18:32.631 "qid": 0, 00:18:32.631 "state": "enabled", 00:18:32.631 "thread": "nvmf_tgt_poll_group_000", 00:18:32.631 "listen_address": { 
00:18:32.631 "trtype": "TCP", 00:18:32.631 "adrfam": "IPv4", 00:18:32.631 "traddr": "10.0.0.2", 00:18:32.631 "trsvcid": "4420" 00:18:32.631 }, 00:18:32.631 "peer_address": { 00:18:32.631 "trtype": "TCP", 00:18:32.631 "adrfam": "IPv4", 00:18:32.631 "traddr": "10.0.0.1", 00:18:32.631 "trsvcid": "51946" 00:18:32.631 }, 00:18:32.631 "auth": { 00:18:32.631 "state": "completed", 00:18:32.631 "digest": "sha256", 00:18:32.631 "dhgroup": "ffdhe2048" 00:18:32.631 } 00:18:32.631 } 00:18:32.631 ]' 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.631 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.897 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.897 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.897 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.897 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.897 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.474 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.408 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.974 14:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.539 00:18:35.539 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.539 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.539 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.807 { 00:18:35.807 "cntlid": 13, 00:18:35.807 "qid": 0, 00:18:35.807 "state": "enabled", 00:18:35.807 "thread": "nvmf_tgt_poll_group_000", 00:18:35.807 "listen_address": { 00:18:35.807 "trtype": "TCP", 00:18:35.807 "adrfam": "IPv4", 00:18:35.807 "traddr": "10.0.0.2", 00:18:35.807 "trsvcid": "4420" 00:18:35.807 }, 00:18:35.807 "peer_address": { 00:18:35.807 "trtype": "TCP", 00:18:35.807 "adrfam": "IPv4", 00:18:35.807 "traddr": "10.0.0.1", 00:18:35.807 "trsvcid": "34958" 00:18:35.807 }, 00:18:35.807 "auth": { 00:18:35.807 
"state": "completed", 00:18:35.807 "digest": "sha256", 00:18:35.807 "dhgroup": "ffdhe2048" 00:18:35.807 } 00:18:35.807 } 00:18:35.807 ]' 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.807 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.808 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.808 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.808 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.808 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.379 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.750 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.008 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.265 00:18:38.265 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.265 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.265 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.523 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.523 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.523 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.523 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.523 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.523 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.523 { 00:18:38.523 "cntlid": 15, 00:18:38.523 "qid": 0, 00:18:38.523 "state": "enabled", 00:18:38.523 "thread": "nvmf_tgt_poll_group_000", 00:18:38.523 "listen_address": { 00:18:38.523 "trtype": "TCP", 00:18:38.523 "adrfam": "IPv4", 00:18:38.523 "traddr": "10.0.0.2", 00:18:38.523 "trsvcid": "4420" 00:18:38.523 }, 00:18:38.523 "peer_address": { 00:18:38.523 "trtype": "TCP", 00:18:38.523 "adrfam": "IPv4", 00:18:38.523 "traddr": "10.0.0.1", 00:18:38.523 "trsvcid": "34986" 00:18:38.523 }, 00:18:38.523 "auth": { 00:18:38.523 "state": "completed", 00:18:38.523 "digest": "sha256", 00:18:38.523 "dhgroup": "ffdhe2048" 00:18:38.524 } 00:18:38.524 } 00:18:38.524 ]' 00:18:38.524 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.781 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.781 14:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.781 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.781 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.781 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.781 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.781 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.038 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:40.411 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.670 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.235 00:18:41.235 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.235 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.235 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.802 { 00:18:41.802 "cntlid": 17, 00:18:41.802 "qid": 0, 00:18:41.802 "state": "enabled", 00:18:41.802 "thread": "nvmf_tgt_poll_group_000", 00:18:41.802 "listen_address": { 00:18:41.802 "trtype": "TCP", 00:18:41.802 "adrfam": "IPv4", 00:18:41.802 "traddr": "10.0.0.2", 00:18:41.802 "trsvcid": "4420" 00:18:41.802 }, 00:18:41.802 "peer_address": { 00:18:41.802 "trtype": "TCP", 00:18:41.802 "adrfam": "IPv4", 00:18:41.802 "traddr": "10.0.0.1", 00:18:41.802 "trsvcid": "35018" 00:18:41.802 }, 00:18:41.802 "auth": { 00:18:41.802 "state": "completed", 00:18:41.802 "digest": "sha256", 00:18:41.802 "dhgroup": "ffdhe3072" 00:18:41.802 } 00:18:41.802 } 00:18:41.802 ]' 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.802 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.802 14:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.060 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.060 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.060 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.626 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:43.560 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.125 14:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.125 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.382 00:18:44.382 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.382 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.382 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.948 { 00:18:44.948 "cntlid": 19, 00:18:44.948 "qid": 0, 00:18:44.948 "state": "enabled", 00:18:44.948 "thread": "nvmf_tgt_poll_group_000", 00:18:44.948 "listen_address": { 00:18:44.948 "trtype": "TCP", 00:18:44.948 "adrfam": "IPv4", 00:18:44.948 "traddr": "10.0.0.2", 00:18:44.948 "trsvcid": "4420" 00:18:44.948 }, 00:18:44.948 "peer_address": { 00:18:44.948 "trtype": "TCP", 00:18:44.948 "adrfam": "IPv4", 00:18:44.948 "traddr": "10.0.0.1", 00:18:44.948 "trsvcid": "39412" 00:18:44.948 }, 00:18:44.948 "auth": { 00:18:44.948 "state": "completed", 00:18:44.948 "digest": "sha256", 00:18:44.948 "dhgroup": "ffdhe3072" 00:18:44.948 } 00:18:44.948 } 00:18:44.948 ]' 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.948 14:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.948 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.206 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.587 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.206 00:18:47.206 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.206 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.206 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.463 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.463 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.463 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.463 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.463 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.463 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.463 { 00:18:47.463 "cntlid": 21, 00:18:47.463 "qid": 0, 00:18:47.463 "state": "enabled", 00:18:47.463 "thread": "nvmf_tgt_poll_group_000", 00:18:47.463 "listen_address": { 00:18:47.463 "trtype": "TCP", 00:18:47.463 "adrfam": "IPv4", 00:18:47.463 "traddr": "10.0.0.2", 00:18:47.463 "trsvcid": "4420" 00:18:47.463 }, 00:18:47.463 "peer_address": { 00:18:47.463 "trtype": "TCP", 00:18:47.463 "adrfam": "IPv4", 00:18:47.463 "traddr": "10.0.0.1", 00:18:47.464 "trsvcid": "39448" 00:18:47.464 }, 00:18:47.464 "auth": { 00:18:47.464 "state": "completed", 00:18:47.464 "digest": "sha256", 00:18:47.464 "dhgroup": "ffdhe3072" 00:18:47.464 } 00:18:47.464 } 00:18:47.464 ]' 00:18:47.464 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.464 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.464 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.464 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.464 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.721 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.721 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.721 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.979 
14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.913 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.170 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:49.170 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.170 14:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.171 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.736 00:18:49.736 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.736 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.736 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.994 { 00:18:49.994 "cntlid": 23, 00:18:49.994 "qid": 0, 00:18:49.994 "state": "enabled", 00:18:49.994 "thread": "nvmf_tgt_poll_group_000", 00:18:49.994 "listen_address": { 00:18:49.994 "trtype": "TCP", 00:18:49.994 "adrfam": "IPv4", 00:18:49.994 "traddr": "10.0.0.2", 00:18:49.994 "trsvcid": "4420" 00:18:49.994 }, 00:18:49.994 "peer_address": { 00:18:49.994 "trtype": "TCP", 00:18:49.994 "adrfam": "IPv4", 00:18:49.994 "traddr": "10.0.0.1", 00:18:49.994 "trsvcid": "39476" 00:18:49.994 }, 00:18:49.994 "auth": { 00:18:49.994 "state": "completed", 00:18:49.994 "digest": "sha256", 00:18:49.994 "dhgroup": "ffdhe3072" 00:18:49.994 } 00:18:49.994 } 00:18:49.994 ]' 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.994 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.560 14:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:18:51.933 14:13:08 
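Note that the key3 pass above runs nvmf_subsystem_add_host and the attach with --dhchap-key key3 only, no ckey3: the expansion at auth.sh@37 drops the whole controller-key argument pair when ckeys[3] is empty. A sketch of that bash idiom (the array contents here are placeholders, not the test's real keys):

    # ${ckeys[$id]:+...} expands to the two-word option pair only when a
    # controller key exists for this index, and to nothing at all otherwise
    ckeys=("ckey0-secret" "ckey1-secret" "ckey2-secret" "")   # placeholder values
    id=3
    ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
    echo "${#ckey[@]} extra args"   # "0 extra args" for id=3, "2" for id=0..2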
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:51.933 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.191 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.192 14:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.755 00:18:53.012 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.012 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.012 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.270 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.270 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.270 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.270 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.270 { 00:18:53.270 "cntlid": 25, 00:18:53.270 "qid": 0, 00:18:53.270 "state": "enabled", 00:18:53.270 "thread": "nvmf_tgt_poll_group_000", 00:18:53.270 "listen_address": { 00:18:53.270 "trtype": "TCP", 00:18:53.270 "adrfam": "IPv4", 00:18:53.270 "traddr": "10.0.0.2", 00:18:53.270 "trsvcid": "4420" 00:18:53.270 }, 00:18:53.270 "peer_address": { 00:18:53.270 "trtype": "TCP", 00:18:53.270 "adrfam": "IPv4", 00:18:53.270 "traddr": "10.0.0.1", 00:18:53.270 "trsvcid": "60600" 00:18:53.270 }, 00:18:53.270 "auth": { 00:18:53.270 "state": "completed", 00:18:53.270 "digest": "sha256", 00:18:53.270 "dhgroup": "ffdhe4096" 00:18:53.270 } 00:18:53.270 } 00:18:53.270 ]' 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.270 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.833 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
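The --dhchap-secret strings passed to nvme connect follow the DH-HMAC-CHAP textual key format from NVMe TP 8006: "DHHC-1:<t>:<base64>:", where <t> is 00 for a plain secret or 01/02/03 for a SHA-256/384/512-transformed one, and the base64 payload is, as the editor understands the format, the secret with a trailing 4-byte CRC-32. A quick length sanity check on the DHHC-1:00 host key from the trace above:

    secret='DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==:'
    b64=${secret#DHHC-1:*:}      # strip the "DHHC-1:<t>:" prefix
    b64=${b64%:}                 # and the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # 52 here: 48 secret bytes + 4-byte CRC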
00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.205 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.463 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.028 00:18:56.028 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.028 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.028 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.592 { 00:18:56.592 "cntlid": 27, 00:18:56.592 "qid": 0, 00:18:56.592 "state": "enabled", 00:18:56.592 "thread": "nvmf_tgt_poll_group_000", 00:18:56.592 "listen_address": { 00:18:56.592 "trtype": "TCP", 00:18:56.592 "adrfam": "IPv4", 00:18:56.592 "traddr": "10.0.0.2", 00:18:56.592 "trsvcid": "4420" 00:18:56.592 }, 00:18:56.592 "peer_address": { 00:18:56.592 "trtype": "TCP", 00:18:56.592 "adrfam": "IPv4", 00:18:56.592 "traddr": "10.0.0.1", 00:18:56.592 "trsvcid": "60634" 00:18:56.592 }, 00:18:56.592 "auth": { 00:18:56.592 "state": "completed", 00:18:56.592 "digest": "sha256", 00:18:56.592 "dhgroup": "ffdhe4096" 00:18:56.592 } 00:18:56.592 } 00:18:56.592 ]' 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.592 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.157 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:18:58.088 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.345 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:58.345 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.345 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.345 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.345 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.345 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.345 14:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.603 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.169 00:18:59.169 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.169 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.169 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.733 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.733 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.733 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.733 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.733 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.733 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.733 { 00:18:59.733 "cntlid": 29, 00:18:59.733 "qid": 0, 00:18:59.733 "state": "enabled", 00:18:59.733 "thread": "nvmf_tgt_poll_group_000", 00:18:59.733 "listen_address": { 00:18:59.733 "trtype": "TCP", 00:18:59.733 "adrfam": "IPv4", 00:18:59.733 "traddr": "10.0.0.2", 00:18:59.733 "trsvcid": "4420" 00:18:59.733 }, 00:18:59.733 "peer_address": { 00:18:59.733 "trtype": "TCP", 00:18:59.733 "adrfam": "IPv4", 00:18:59.733 "traddr": "10.0.0.1", 00:18:59.733 "trsvcid": "60662" 00:18:59.733 }, 00:18:59.733 "auth": { 00:18:59.733 "state": "completed", 00:18:59.733 "digest": "sha256", 00:18:59.733 "dhgroup": "ffdhe4096" 00:18:59.733 } 00:18:59.733 } 00:18:59.733 ]' 00:18:59.733 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.991 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.991 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.991 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.991 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.991 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.991 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.991 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.556 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:19:01.524 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.524 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:01.524 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.524 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.524 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.524 14:13:18 
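Each pass exercises the fabric twice: once through the SPDK host stack (bdev_nvme_attach_controller at auth.sh@40) and once through the kernel initiator (nvme connect at auth.sh@52). When both --dhchap-secret and --dhchap-ctrl-secret are supplied, authentication is bidirectional: the target verifies the host's key and the host verifies the controller's key. The host-side pair, with the long secrets elided here (copy the full DHHC-1 strings from the trace):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0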
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.524 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.524 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.782 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.040 00:19:02.297 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.297 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.297 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.556 { 00:19:02.556 "cntlid": 31, 00:19:02.556 "qid": 0, 00:19:02.556 "state": "enabled", 00:19:02.556 "thread": "nvmf_tgt_poll_group_000", 00:19:02.556 "listen_address": { 00:19:02.556 "trtype": "TCP", 00:19:02.556 "adrfam": "IPv4", 00:19:02.556 "traddr": "10.0.0.2", 00:19:02.556 "trsvcid": "4420" 00:19:02.556 }, 00:19:02.556 "peer_address": { 00:19:02.556 "trtype": "TCP", 00:19:02.556 "adrfam": "IPv4", 00:19:02.556 "traddr": "10.0.0.1", 00:19:02.556 "trsvcid": "60668" 00:19:02.556 }, 00:19:02.556 "auth": { 00:19:02.556 "state": "completed", 00:19:02.556 "digest": "sha256", 00:19:02.556 "dhgroup": "ffdhe4096" 00:19:02.556 } 00:19:02.556 } 00:19:02.556 ]' 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.556 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.491 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.424 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.682 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.615 00:19:05.615 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.615 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.615 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.873 { 00:19:05.873 "cntlid": 33, 00:19:05.873 "qid": 0, 00:19:05.873 "state": "enabled", 00:19:05.873 "thread": "nvmf_tgt_poll_group_000", 00:19:05.873 "listen_address": { 
00:19:05.873 "trtype": "TCP", 00:19:05.873 "adrfam": "IPv4", 00:19:05.873 "traddr": "10.0.0.2", 00:19:05.873 "trsvcid": "4420" 00:19:05.873 }, 00:19:05.873 "peer_address": { 00:19:05.873 "trtype": "TCP", 00:19:05.873 "adrfam": "IPv4", 00:19:05.873 "traddr": "10.0.0.1", 00:19:05.873 "trsvcid": "43374" 00:19:05.873 }, 00:19:05.873 "auth": { 00:19:05.873 "state": "completed", 00:19:05.873 "digest": "sha256", 00:19:05.873 "dhgroup": "ffdhe6144" 00:19:05.873 } 00:19:05.873 } 00:19:05.873 ]' 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.873 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.439 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:07.813 14:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.813 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.377 00:19:08.377 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.377 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.377 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.635 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.635 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.635 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.635 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.635 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.893 { 00:19:08.893 "cntlid": 35, 00:19:08.893 "qid": 0, 00:19:08.893 "state": "enabled", 00:19:08.893 "thread": "nvmf_tgt_poll_group_000", 00:19:08.893 "listen_address": { 00:19:08.893 "trtype": "TCP", 00:19:08.893 "adrfam": "IPv4", 00:19:08.893 "traddr": "10.0.0.2", 00:19:08.893 "trsvcid": "4420" 00:19:08.893 }, 00:19:08.893 "peer_address": { 00:19:08.893 "trtype": "TCP", 00:19:08.893 "adrfam": "IPv4", 00:19:08.893 "traddr": "10.0.0.1", 00:19:08.893 "trsvcid": "43392" 00:19:08.893 
}, 00:19:08.893 "auth": { 00:19:08.893 "state": "completed", 00:19:08.893 "digest": "sha256", 00:19:08.893 "dhgroup": "ffdhe6144" 00:19:08.893 } 00:19:08.893 } 00:19:08.893 ]' 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.893 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.458 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:19:10.391 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.391 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:10.391 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.391 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.648 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.648 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.648 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.648 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:10.906 14:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.906 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.840 00:19:11.840 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.840 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.840 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.405 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.405 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.405 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.405 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.405 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.406 { 00:19:12.406 "cntlid": 37, 00:19:12.406 "qid": 0, 00:19:12.406 "state": "enabled", 00:19:12.406 "thread": "nvmf_tgt_poll_group_000", 00:19:12.406 "listen_address": { 00:19:12.406 "trtype": "TCP", 00:19:12.406 "adrfam": "IPv4", 00:19:12.406 "traddr": "10.0.0.2", 00:19:12.406 "trsvcid": "4420" 00:19:12.406 }, 00:19:12.406 "peer_address": { 00:19:12.406 "trtype": "TCP", 00:19:12.406 "adrfam": "IPv4", 00:19:12.406 "traddr": "10.0.0.1", 00:19:12.406 "trsvcid": "43422" 00:19:12.406 }, 00:19:12.406 "auth": { 00:19:12.406 "state": "completed", 00:19:12.406 "digest": "sha256", 00:19:12.406 "dhgroup": "ffdhe6144" 00:19:12.406 } 00:19:12.406 } 00:19:12.406 ]' 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.406 14:13:29 
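Across iterations the qpair dumps differ only in cntlid (21, 23, 25, ... advancing by two per pass, consistent with each pass creating one SPDK-host controller and one kernel controller) and the ephemeral peer port. A compact jq filter for watching just the moving parts (a sketch using the same rpc_cmd helper as above):

    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r \
        '.[] | "\(.cntlid) \(.peer_address.traddr):\(.peer_address.trsvcid) \(.auth.digest)/\(.auth.dhgroup) \(.auth.state)"'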
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.406 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.971 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:19:14.342 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.342 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:14.343 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.343 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.343 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.343 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.343 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.343 14:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.343 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.273 00:19:15.273 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.273 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.273 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.531 { 00:19:15.531 "cntlid": 39, 00:19:15.531 "qid": 0, 00:19:15.531 "state": "enabled", 00:19:15.531 "thread": "nvmf_tgt_poll_group_000", 00:19:15.531 "listen_address": { 00:19:15.531 "trtype": "TCP", 00:19:15.531 "adrfam": "IPv4", 00:19:15.531 "traddr": "10.0.0.2", 00:19:15.531 "trsvcid": "4420" 00:19:15.531 }, 00:19:15.531 "peer_address": { 00:19:15.531 "trtype": "TCP", 00:19:15.531 "adrfam": "IPv4", 00:19:15.531 "traddr": "10.0.0.1", 00:19:15.531 "trsvcid": "44536" 00:19:15.531 }, 00:19:15.531 "auth": { 00:19:15.531 "state": "completed", 00:19:15.531 "digest": "sha256", 00:19:15.531 "dhgroup": "ffdhe6144" 00:19:15.531 } 00:19:15.531 } 00:19:15.531 ]' 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.531 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.490 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.426 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.684 14:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.615 00:19:18.615 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.615 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.615 14:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.179 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.179 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.179 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.179 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.179 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.179 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.180 { 00:19:19.180 "cntlid": 41, 00:19:19.180 "qid": 0, 00:19:19.180 "state": "enabled", 00:19:19.180 "thread": "nvmf_tgt_poll_group_000", 00:19:19.180 "listen_address": { 00:19:19.180 "trtype": "TCP", 00:19:19.180 "adrfam": "IPv4", 00:19:19.180 "traddr": "10.0.0.2", 00:19:19.180 "trsvcid": "4420" 00:19:19.180 }, 00:19:19.180 "peer_address": { 00:19:19.180 "trtype": "TCP", 00:19:19.180 "adrfam": "IPv4", 00:19:19.180 "traddr": "10.0.0.1", 00:19:19.180 "trsvcid": "44560" 00:19:19.180 }, 00:19:19.180 "auth": { 00:19:19.180 "state": "completed", 00:19:19.180 "digest": "sha256", 00:19:19.180 "dhgroup": "ffdhe8192" 00:19:19.180 } 00:19:19.180 } 00:19:19.180 ]' 00:19:19.180 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.437 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.437 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.437 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.437 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.437 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.437 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:19.437 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.694 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:19:20.624 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.625 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:20.625 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.625 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.625 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.625 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.625 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.625 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.189 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.122 00:19:22.122 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.122 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.122 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.379 { 00:19:22.379 "cntlid": 43, 00:19:22.379 "qid": 0, 00:19:22.379 "state": "enabled", 00:19:22.379 "thread": "nvmf_tgt_poll_group_000", 00:19:22.379 "listen_address": { 00:19:22.379 "trtype": "TCP", 00:19:22.379 "adrfam": "IPv4", 00:19:22.379 "traddr": "10.0.0.2", 00:19:22.379 "trsvcid": "4420" 00:19:22.379 }, 00:19:22.379 "peer_address": { 00:19:22.379 "trtype": "TCP", 00:19:22.379 "adrfam": "IPv4", 00:19:22.379 "traddr": "10.0.0.1", 00:19:22.379 "trsvcid": "44584" 00:19:22.379 }, 00:19:22.379 "auth": { 00:19:22.379 "state": "completed", 00:19:22.379 "digest": "sha256", 00:19:22.379 "dhgroup": "ffdhe8192" 00:19:22.379 } 00:19:22.379 } 00:19:22.379 ]' 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.379 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.636 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.636 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.636 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.636 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.636 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.893 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.825 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.390 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.323 00:19:25.323 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.323 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.323 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.888 { 00:19:25.888 "cntlid": 45, 00:19:25.888 "qid": 0, 00:19:25.888 "state": "enabled", 00:19:25.888 "thread": "nvmf_tgt_poll_group_000", 00:19:25.888 "listen_address": { 00:19:25.888 "trtype": "TCP", 00:19:25.888 "adrfam": "IPv4", 00:19:25.888 "traddr": "10.0.0.2", 00:19:25.888 "trsvcid": "4420" 00:19:25.888 }, 00:19:25.888 "peer_address": { 00:19:25.888 "trtype": "TCP", 00:19:25.888 "adrfam": "IPv4", 00:19:25.888 "traddr": "10.0.0.1", 00:19:25.888 "trsvcid": "36592" 00:19:25.888 }, 00:19:25.888 "auth": { 00:19:25.888 "state": "completed", 00:19:25.888 "digest": "sha256", 00:19:25.888 "dhgroup": "ffdhe8192" 00:19:25.888 } 00:19:25.888 } 00:19:25.888 ]' 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.888 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.452 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret 
DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.382 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.639 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:27.639 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.639 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.639 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.639 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.639 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.639 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:27.640 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.640 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.640 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.640 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.640 14:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.571 00:19:28.571 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.571 14:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.571 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.152 { 00:19:29.152 "cntlid": 47, 00:19:29.152 "qid": 0, 00:19:29.152 "state": "enabled", 00:19:29.152 "thread": "nvmf_tgt_poll_group_000", 00:19:29.152 "listen_address": { 00:19:29.152 "trtype": "TCP", 00:19:29.152 "adrfam": "IPv4", 00:19:29.152 "traddr": "10.0.0.2", 00:19:29.152 "trsvcid": "4420" 00:19:29.152 }, 00:19:29.152 "peer_address": { 00:19:29.152 "trtype": "TCP", 00:19:29.152 "adrfam": "IPv4", 00:19:29.152 "traddr": "10.0.0.1", 00:19:29.152 "trsvcid": "36624" 00:19:29.152 }, 00:19:29.152 "auth": { 00:19:29.152 "state": "completed", 00:19:29.152 "digest": "sha256", 00:19:29.152 "dhgroup": "ffdhe8192" 00:19:29.152 } 00:19:29.152 } 00:19:29.152 ]' 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.152 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.153 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.153 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.153 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.153 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.153 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.410 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.783 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.093 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.350 00:19:31.351 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.351 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.351 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.916 { 00:19:31.916 "cntlid": 49, 00:19:31.916 "qid": 0, 00:19:31.916 "state": "enabled", 00:19:31.916 "thread": "nvmf_tgt_poll_group_000", 00:19:31.916 "listen_address": { 00:19:31.916 "trtype": "TCP", 00:19:31.916 "adrfam": "IPv4", 00:19:31.916 "traddr": "10.0.0.2", 00:19:31.916 "trsvcid": "4420" 00:19:31.916 }, 00:19:31.916 "peer_address": { 00:19:31.916 "trtype": "TCP", 00:19:31.916 "adrfam": "IPv4", 00:19:31.916 "traddr": "10.0.0.1", 00:19:31.916 "trsvcid": "36642" 00:19:31.916 }, 00:19:31.916 "auth": { 00:19:31.916 "state": "completed", 00:19:31.916 "digest": "sha384", 00:19:31.916 "dhgroup": "null" 00:19:31.916 } 00:19:31.916 } 00:19:31.916 ]' 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.916 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.481 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:19:33.412 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.412 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:33.412 14:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.412 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.412 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.412 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.412 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.412 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.669 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.233 00:19:34.233 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.233 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.233 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.797 { 00:19:34.797 "cntlid": 51, 00:19:34.797 "qid": 0, 00:19:34.797 "state": "enabled", 00:19:34.797 "thread": "nvmf_tgt_poll_group_000", 00:19:34.797 "listen_address": { 00:19:34.797 "trtype": "TCP", 00:19:34.797 "adrfam": "IPv4", 00:19:34.797 "traddr": "10.0.0.2", 00:19:34.797 "trsvcid": "4420" 00:19:34.797 }, 00:19:34.797 "peer_address": { 00:19:34.797 "trtype": "TCP", 00:19:34.797 "adrfam": "IPv4", 00:19:34.797 "traddr": "10.0.0.1", 00:19:34.797 "trsvcid": "33200" 00:19:34.797 }, 00:19:34.797 "auth": { 00:19:34.797 "state": "completed", 00:19:34.797 "digest": "sha384", 00:19:34.797 "dhgroup": "null" 00:19:34.797 } 00:19:34.797 } 00:19:34.797 ]' 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.797 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.054 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.425 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.425 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.991 00:19:36.991 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.991 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.991 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.249 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.249 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.249 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.249 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.249 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:37.249 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.249 { 00:19:37.249 "cntlid": 53, 00:19:37.249 "qid": 0, 00:19:37.249 "state": "enabled", 00:19:37.249 "thread": "nvmf_tgt_poll_group_000", 00:19:37.249 "listen_address": { 00:19:37.249 "trtype": "TCP", 00:19:37.249 "adrfam": "IPv4", 00:19:37.249 "traddr": "10.0.0.2", 00:19:37.249 "trsvcid": "4420" 00:19:37.249 }, 00:19:37.249 "peer_address": { 00:19:37.249 "trtype": "TCP", 00:19:37.249 "adrfam": "IPv4", 00:19:37.249 "traddr": "10.0.0.1", 00:19:37.249 "trsvcid": "33228" 00:19:37.249 }, 00:19:37.249 "auth": { 00:19:37.249 "state": "completed", 00:19:37.249 "digest": "sha384", 00:19:37.249 "dhgroup": "null" 00:19:37.249 } 00:19:37.249 } 00:19:37.249 ]' 00:19:37.249 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.249 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.249 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.249 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:37.249 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.506 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.506 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.506 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.072 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.004 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.261 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:39.261 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.261 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.261 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.518 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.518 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.518 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:39.519 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.519 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.519 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.519 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.519 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.776 00:19:39.776 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.776 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.776 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.340 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.341 { 00:19:40.341 "cntlid": 55, 00:19:40.341 "qid": 0, 00:19:40.341 "state": "enabled", 00:19:40.341 "thread": "nvmf_tgt_poll_group_000", 00:19:40.341 "listen_address": { 00:19:40.341 "trtype": "TCP", 00:19:40.341 "adrfam": "IPv4", 00:19:40.341 "traddr": "10.0.0.2", 00:19:40.341 "trsvcid": "4420" 00:19:40.341 }, 00:19:40.341 "peer_address": { 
00:19:40.341 "trtype": "TCP", 00:19:40.341 "adrfam": "IPv4", 00:19:40.341 "traddr": "10.0.0.1", 00:19:40.341 "trsvcid": "33248" 00:19:40.341 }, 00:19:40.341 "auth": { 00:19:40.341 "state": "completed", 00:19:40.341 "digest": "sha384", 00:19:40.341 "dhgroup": "null" 00:19:40.341 } 00:19:40.341 } 00:19:40.341 ]' 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.341 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.906 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.279 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.279 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.843 00:19:42.843 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.843 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.843 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.101 { 00:19:43.101 "cntlid": 57, 00:19:43.101 "qid": 0, 00:19:43.101 "state": "enabled", 00:19:43.101 "thread": "nvmf_tgt_poll_group_000", 00:19:43.101 "listen_address": { 00:19:43.101 "trtype": "TCP", 00:19:43.101 "adrfam": "IPv4", 00:19:43.101 "traddr": "10.0.0.2", 00:19:43.101 "trsvcid": "4420" 00:19:43.101 }, 00:19:43.101 "peer_address": { 00:19:43.101 "trtype": "TCP", 00:19:43.101 "adrfam": "IPv4", 00:19:43.101 "traddr": "10.0.0.1", 00:19:43.101 "trsvcid": "33606" 00:19:43.101 }, 00:19:43.101 "auth": { 00:19:43.101 "state": "completed", 00:19:43.101 "digest": "sha384", 00:19:43.101 "dhgroup": "ffdhe2048" 00:19:43.101 } 00:19:43.101 } 00:19:43.101 ]' 
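(Editor's note: the xtrace above repeats one connect_authenticate flow per digest/dhgroup/key combination. Below is a condensed, hand-written sketch of a single iteration, using only the RPCs, paths, and NQNs that appear verbatim in this run. It is not part of the log. Assumptions are flagged in comments: the DH-HMAC-CHAP key names key0/ckey0 are taken from the trace but their keyring registration happens earlier in target/auth.sh and is not shown here, and rpc_cmd in the script is assumed to reach the target over its default RPC socket while the host-side bdev_nvme calls use /var/tmp/host.sock as logged.)

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate iteration from target/auth.sh (paths from this run).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    subnqn=nqn.2024-03.io.spdk:cnode0

    # auth.sh@94: restrict the host-side initiator to one digest/dhgroup pair
    # (sha384/ffdhe2048 is the pair under test at this point in the log).
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # auth.sh@39: allow the host on the target subsystem with the key under test.
    # Assumption: key0/ckey0 were registered in the keyring earlier in the script.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # auth.sh@40: attach a controller over TCP; DH-HMAC-CHAP runs during connect.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # auth.sh@44-48: verify the negotiated parameters on the target, as the
    # [[ ... ]] assertions in the trace do.
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect: sha384
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe2048

    # auth.sh@49-56: tear down before the next combination. The script also
    # authenticates with the kernel initiator in between (secrets elided here):
    #   nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    #       --hostid <uuid> --dhchap-secret DHHC-1:... [--dhchap-ctrl-secret DHHC-1:...]
    #   nvme disconnect -n $subnqn
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

(End of note; the log resumes below.)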
00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.101 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.360 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.360 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.360 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.617 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:19:44.549 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.806 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:44.806 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.806 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.806 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.806 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.806 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.806 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.067 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.363 00:19:45.364 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.364 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.364 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.621 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.621 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.621 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.621 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.621 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.621 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.621 { 00:19:45.621 "cntlid": 59, 00:19:45.621 "qid": 0, 00:19:45.621 "state": "enabled", 00:19:45.621 "thread": "nvmf_tgt_poll_group_000", 00:19:45.621 "listen_address": { 00:19:45.621 "trtype": "TCP", 00:19:45.621 "adrfam": "IPv4", 00:19:45.621 "traddr": "10.0.0.2", 00:19:45.621 "trsvcid": "4420" 00:19:45.621 }, 00:19:45.621 "peer_address": { 00:19:45.621 "trtype": "TCP", 00:19:45.621 "adrfam": "IPv4", 00:19:45.621 "traddr": "10.0.0.1", 00:19:45.621 "trsvcid": "33630" 00:19:45.621 }, 00:19:45.621 "auth": { 00:19:45.621 "state": "completed", 00:19:45.621 "digest": "sha384", 00:19:45.621 "dhgroup": "ffdhe2048" 00:19:45.621 } 00:19:45.621 } 00:19:45.621 ]' 00:19:45.621 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.879 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.879 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.879 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.879 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.879 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.879 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.879 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.444 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.815 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.380 
14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.380 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.638 00:19:48.638 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.638 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.638 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.203 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.204 { 00:19:49.204 "cntlid": 61, 00:19:49.204 "qid": 0, 00:19:49.204 "state": "enabled", 00:19:49.204 "thread": "nvmf_tgt_poll_group_000", 00:19:49.204 "listen_address": { 00:19:49.204 "trtype": "TCP", 00:19:49.204 "adrfam": "IPv4", 00:19:49.204 "traddr": "10.0.0.2", 00:19:49.204 "trsvcid": "4420" 00:19:49.204 }, 00:19:49.204 "peer_address": { 00:19:49.204 "trtype": "TCP", 00:19:49.204 "adrfam": "IPv4", 00:19:49.204 "traddr": "10.0.0.1", 00:19:49.204 "trsvcid": "33664" 00:19:49.204 }, 00:19:49.204 "auth": { 00:19:49.204 "state": "completed", 00:19:49.204 "digest": "sha384", 00:19:49.204 "dhgroup": "ffdhe2048" 00:19:49.204 } 00:19:49.204 } 00:19:49.204 ]' 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.204 14:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.204 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.770 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:19:50.704 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.704 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:50.704 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.704 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.963 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.963 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.963 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.963 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.221 
14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.221 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.786 00:19:51.786 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.786 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.786 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.044 { 00:19:52.044 "cntlid": 63, 00:19:52.044 "qid": 0, 00:19:52.044 "state": "enabled", 00:19:52.044 "thread": "nvmf_tgt_poll_group_000", 00:19:52.044 "listen_address": { 00:19:52.044 "trtype": "TCP", 00:19:52.044 "adrfam": "IPv4", 00:19:52.044 "traddr": "10.0.0.2", 00:19:52.044 "trsvcid": "4420" 00:19:52.044 }, 00:19:52.044 "peer_address": { 00:19:52.044 "trtype": "TCP", 00:19:52.044 "adrfam": "IPv4", 00:19:52.044 "traddr": "10.0.0.1", 00:19:52.044 "trsvcid": "33706" 00:19:52.044 }, 00:19:52.044 "auth": { 00:19:52.044 "state": "completed", 00:19:52.044 "digest": "sha384", 00:19:52.044 "dhgroup": "ffdhe2048" 00:19:52.044 } 00:19:52.044 } 00:19:52.044 ]' 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.044 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.302 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.302 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.302 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:52.559 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.490 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.748 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.748 14:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.682 00:19:54.682 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.682 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.682 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.940 { 00:19:54.940 "cntlid": 65, 00:19:54.940 "qid": 0, 00:19:54.940 "state": "enabled", 00:19:54.940 "thread": "nvmf_tgt_poll_group_000", 00:19:54.940 "listen_address": { 00:19:54.940 "trtype": "TCP", 00:19:54.940 "adrfam": "IPv4", 00:19:54.940 "traddr": "10.0.0.2", 00:19:54.940 "trsvcid": "4420" 00:19:54.940 }, 00:19:54.940 "peer_address": { 00:19:54.940 "trtype": "TCP", 00:19:54.940 "adrfam": "IPv4", 00:19:54.940 "traddr": "10.0.0.1", 00:19:54.940 "trsvcid": "57370" 00:19:54.940 }, 00:19:54.940 "auth": { 00:19:54.940 "state": "completed", 00:19:54.940 "digest": "sha384", 00:19:54.940 "dhgroup": "ffdhe3072" 00:19:54.940 } 00:19:54.940 } 00:19:54.940 ]' 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.940 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.198 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.198 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.198 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.456 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.829 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.830 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.395 00:19:57.395 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.395 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.395 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.653 { 00:19:57.653 "cntlid": 67, 00:19:57.653 "qid": 0, 00:19:57.653 "state": "enabled", 00:19:57.653 "thread": "nvmf_tgt_poll_group_000", 00:19:57.653 "listen_address": { 00:19:57.653 "trtype": "TCP", 00:19:57.653 "adrfam": "IPv4", 00:19:57.653 "traddr": "10.0.0.2", 00:19:57.653 "trsvcid": "4420" 00:19:57.653 }, 00:19:57.653 "peer_address": { 00:19:57.653 "trtype": "TCP", 00:19:57.653 "adrfam": "IPv4", 00:19:57.653 "traddr": "10.0.0.1", 00:19:57.653 "trsvcid": "57408" 00:19:57.653 }, 00:19:57.653 "auth": { 00:19:57.653 "state": "completed", 00:19:57.653 "digest": "sha384", 00:19:57.653 "dhgroup": "ffdhe3072" 00:19:57.653 } 00:19:57.653 } 00:19:57.653 ]' 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.653 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.219 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:19:59.153 14:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.153 14:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:59.153 14:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.153 14:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.153 14:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.153 14:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.153 14:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.153 14:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.758 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:59.758 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.758 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.759 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.325 00:20:00.325 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.325 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.325 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.890 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.890 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.890 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.890 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.890 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.890 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.890 { 00:20:00.890 "cntlid": 69, 00:20:00.890 "qid": 0, 00:20:00.890 "state": "enabled", 00:20:00.890 "thread": "nvmf_tgt_poll_group_000", 00:20:00.890 "listen_address": { 00:20:00.890 "trtype": "TCP", 00:20:00.890 "adrfam": "IPv4", 00:20:00.890 "traddr": "10.0.0.2", 00:20:00.890 "trsvcid": "4420" 00:20:00.890 }, 00:20:00.890 "peer_address": { 00:20:00.890 "trtype": "TCP", 00:20:00.890 "adrfam": "IPv4", 00:20:00.890 "traddr": "10.0.0.1", 00:20:00.890 "trsvcid": "57432" 00:20:00.890 }, 00:20:00.890 "auth": { 00:20:00.890 "state": "completed", 00:20:00.890 "digest": "sha384", 00:20:00.890 "dhgroup": "ffdhe3072" 00:20:00.890 } 00:20:00.890 } 00:20:00.890 ]' 00:20:00.890 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.148 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.148 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.148 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.148 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.148 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.148 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.148 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.714 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.647 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.905 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.470 00:20:03.470 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.471 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.471 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.035 14:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.035 { 00:20:04.035 "cntlid": 71, 00:20:04.035 "qid": 0, 00:20:04.035 "state": "enabled", 00:20:04.035 "thread": "nvmf_tgt_poll_group_000", 00:20:04.035 "listen_address": { 00:20:04.035 "trtype": "TCP", 00:20:04.035 "adrfam": "IPv4", 00:20:04.035 "traddr": "10.0.0.2", 00:20:04.035 "trsvcid": "4420" 00:20:04.035 }, 00:20:04.035 "peer_address": { 00:20:04.035 "trtype": "TCP", 00:20:04.035 "adrfam": "IPv4", 00:20:04.035 "traddr": "10.0.0.1", 00:20:04.035 "trsvcid": "43676" 00:20:04.035 }, 00:20:04.035 "auth": { 00:20:04.035 "state": "completed", 00:20:04.035 "digest": "sha384", 00:20:04.035 "dhgroup": "ffdhe3072" 00:20:04.035 } 00:20:04.035 } 00:20:04.035 ]' 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.035 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.292 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.292 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.292 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.857 14:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.789 14:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.789 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.046 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.628 00:20:06.629 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.629 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.629 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.886 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.886 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.886 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.886 14:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.886 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.886 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.886 { 00:20:06.886 "cntlid": 73, 00:20:06.886 "qid": 0, 00:20:06.886 "state": "enabled", 00:20:06.886 "thread": "nvmf_tgt_poll_group_000", 00:20:06.886 "listen_address": { 00:20:06.886 "trtype": "TCP", 00:20:06.886 "adrfam": "IPv4", 00:20:06.886 "traddr": "10.0.0.2", 00:20:06.886 "trsvcid": "4420" 00:20:06.886 }, 00:20:06.886 "peer_address": { 00:20:06.886 "trtype": "TCP", 00:20:06.886 "adrfam": "IPv4", 00:20:06.886 "traddr": "10.0.0.1", 00:20:06.886 "trsvcid": "43692" 00:20:06.886 }, 00:20:06.886 "auth": { 00:20:06.886 "state": "completed", 00:20:06.886 "digest": "sha384", 00:20:06.886 "dhgroup": "ffdhe4096" 00:20:06.886 } 00:20:06.886 } 00:20:06.886 ]' 00:20:06.886 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.143 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.143 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.143 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.143 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.143 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.143 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.143 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.401 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.793 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.050 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.615 00:20:09.615 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.615 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.615 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:20:10.181 { 00:20:10.181 "cntlid": 75, 00:20:10.181 "qid": 0, 00:20:10.181 "state": "enabled", 00:20:10.181 "thread": "nvmf_tgt_poll_group_000", 00:20:10.181 "listen_address": { 00:20:10.181 "trtype": "TCP", 00:20:10.181 "adrfam": "IPv4", 00:20:10.181 "traddr": "10.0.0.2", 00:20:10.181 "trsvcid": "4420" 00:20:10.181 }, 00:20:10.181 "peer_address": { 00:20:10.181 "trtype": "TCP", 00:20:10.181 "adrfam": "IPv4", 00:20:10.181 "traddr": "10.0.0.1", 00:20:10.181 "trsvcid": "43704" 00:20:10.181 }, 00:20:10.181 "auth": { 00:20:10.181 "state": "completed", 00:20:10.181 "digest": "sha384", 00:20:10.181 "dhgroup": "ffdhe4096" 00:20:10.181 } 00:20:10.181 } 00:20:10.181 ]' 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.181 14:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.439 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.813 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.071 
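[Note] Every iteration in this trace starts from the same host-side preamble: restrict the initiator's DH-CHAP negotiation to exactly one digest/dhgroup pair before re-attaching, so the subsequent handshake can only succeed with that combination. A minimal sketch of that step, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used by this job:

# Constrain the host (initiator) bdev_nvme layer to a single DH-CHAP
# digest and DH group; /var/tmp/host.sock is the host-side RPC socket.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 \
    --dhchap-dhgroups ffdhe4096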
14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.071 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.072 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.072 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.637 00:20:12.637 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.637 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.637 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.203 { 00:20:13.203 "cntlid": 77, 00:20:13.203 "qid": 0, 00:20:13.203 "state": "enabled", 00:20:13.203 "thread": "nvmf_tgt_poll_group_000", 00:20:13.203 "listen_address": { 00:20:13.203 "trtype": "TCP", 00:20:13.203 "adrfam": "IPv4", 00:20:13.203 "traddr": "10.0.0.2", 00:20:13.203 "trsvcid": "4420" 00:20:13.203 }, 00:20:13.203 "peer_address": { 
00:20:13.203 "trtype": "TCP", 00:20:13.203 "adrfam": "IPv4", 00:20:13.203 "traddr": "10.0.0.1", 00:20:13.203 "trsvcid": "56948" 00:20:13.203 }, 00:20:13.203 "auth": { 00:20:13.203 "state": "completed", 00:20:13.203 "digest": "sha384", 00:20:13.203 "dhgroup": "ffdhe4096" 00:20:13.203 } 00:20:13.203 } 00:20:13.203 ]' 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.203 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.203 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.203 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.203 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.769 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.735 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
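[Note] The post-attach check repeated for every cntlid above follows one pattern: fetch the subsystem's queue pairs from the target side (the script's rpc_cmd helper, which uses the target's default RPC socket rather than /var/tmp/host.sock) and compare the negotiated auth fields against the configured values. A condensed sketch, again abbreviating the script path:

# Pull the qpair list and assert the negotiated auth parameters; any
# mismatch fails the [[ ... ]] test and aborts the run.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]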
00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.301 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.867 00:20:15.867 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.867 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.867 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.124 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.124 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.124 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.124 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.382 { 00:20:16.382 "cntlid": 79, 00:20:16.382 "qid": 0, 00:20:16.382 "state": "enabled", 00:20:16.382 "thread": "nvmf_tgt_poll_group_000", 00:20:16.382 "listen_address": { 00:20:16.382 "trtype": "TCP", 00:20:16.382 "adrfam": "IPv4", 00:20:16.382 "traddr": "10.0.0.2", 00:20:16.382 "trsvcid": "4420" 00:20:16.382 }, 00:20:16.382 "peer_address": { 00:20:16.382 "trtype": "TCP", 00:20:16.382 "adrfam": "IPv4", 00:20:16.382 "traddr": "10.0.0.1", 00:20:16.382 "trsvcid": "56970" 00:20:16.382 }, 00:20:16.382 "auth": { 00:20:16.382 "state": "completed", 00:20:16.382 "digest": "sha384", 00:20:16.382 "dhgroup": "ffdhe4096" 00:20:16.382 } 00:20:16.382 } 00:20:16.382 ]' 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.382 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.947 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:17.892 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.151 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.084 00:20:19.084 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.084 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.084 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.341 { 00:20:19.341 "cntlid": 81, 00:20:19.341 "qid": 0, 00:20:19.341 "state": "enabled", 00:20:19.341 "thread": "nvmf_tgt_poll_group_000", 00:20:19.341 "listen_address": { 00:20:19.341 "trtype": "TCP", 00:20:19.341 "adrfam": "IPv4", 00:20:19.341 "traddr": "10.0.0.2", 00:20:19.341 "trsvcid": "4420" 00:20:19.341 }, 00:20:19.341 "peer_address": { 00:20:19.341 "trtype": "TCP", 00:20:19.341 "adrfam": "IPv4", 00:20:19.341 "traddr": "10.0.0.1", 00:20:19.341 "trsvcid": "57004" 00:20:19.341 }, 00:20:19.341 "auth": { 00:20:19.341 "state": "completed", 00:20:19.341 "digest": "sha384", 00:20:19.341 "dhgroup": "ffdhe6144" 00:20:19.341 } 00:20:19.341 } 00:20:19.341 ]' 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.341 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.599 14:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.599 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.599 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.599 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.599 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.166 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.099 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.665 14:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.665 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.231 00:20:22.231 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.231 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.231 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.797 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.797 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.797 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.797 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.797 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.797 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.797 { 00:20:22.797 "cntlid": 83, 00:20:22.797 "qid": 0, 00:20:22.797 "state": "enabled", 00:20:22.797 "thread": "nvmf_tgt_poll_group_000", 00:20:22.797 "listen_address": { 00:20:22.797 "trtype": "TCP", 00:20:22.797 "adrfam": "IPv4", 00:20:22.797 "traddr": "10.0.0.2", 00:20:22.797 "trsvcid": "4420" 00:20:22.797 }, 00:20:22.797 "peer_address": { 00:20:22.797 "trtype": "TCP", 00:20:22.797 "adrfam": "IPv4", 00:20:22.797 "traddr": "10.0.0.1", 00:20:22.797 "trsvcid": "57046" 00:20:22.797 }, 00:20:22.797 "auth": { 00:20:22.797 "state": "completed", 00:20:22.797 "digest": "sha384", 00:20:22.797 "dhgroup": "ffdhe6144" 00:20:22.797 } 00:20:22.797 } 00:20:22.797 ]' 00:20:22.797 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.055 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.055 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.055 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.055 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.055 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.055 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.055 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.313 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.686 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.944 14:14:41 
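[Note] On the target side, each key index is granted to the host NQN before the host attaches, as in the key2 grant that just completed. Sketch below; key2/ckey2 are keyring names presumably registered earlier in auth.sh, not literal secrets:

# Allow this host on the subsystem and bind both a host key and a
# controller key, i.e. bidirectional DH-CHAP authentication.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2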
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.944 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.509 00:20:25.509 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.509 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.509 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.766 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.766 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.766 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.766 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.766 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.766 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.766 { 00:20:25.766 "cntlid": 85, 00:20:25.766 "qid": 0, 00:20:25.766 "state": "enabled", 00:20:25.766 "thread": "nvmf_tgt_poll_group_000", 00:20:25.766 "listen_address": { 00:20:25.766 "trtype": "TCP", 00:20:25.767 "adrfam": "IPv4", 00:20:25.767 "traddr": "10.0.0.2", 00:20:25.767 "trsvcid": "4420" 00:20:25.767 }, 00:20:25.767 "peer_address": { 00:20:25.767 "trtype": "TCP", 00:20:25.767 "adrfam": "IPv4", 00:20:25.767 "traddr": "10.0.0.1", 00:20:25.767 "trsvcid": "55166" 00:20:25.767 }, 00:20:25.767 "auth": { 00:20:25.767 "state": "completed", 00:20:25.767 "digest": "sha384", 00:20:25.767 "dhgroup": "ffdhe6144" 00:20:25.767 } 00:20:25.767 } 00:20:25.767 ]' 00:20:25.767 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.767 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.767 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.024 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.024 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.024 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.024 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.024 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.282 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.652 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.909 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.909 14:14:44 
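[Note] The key3 pass being issued here is the unidirectional case: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion visible in the trace yields nothing when no controller key exists for the index, so both the target grant and the host attach carry only --dhchap-key. The attach, condensed:

# key3 has no matching ckey, so --dhchap-ctrlr-key is omitted and the
# controller does not authenticate back to the host on this pass.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3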
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.497 00:20:28.497 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.497 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.497 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.074 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.074 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.074 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.074 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.074 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.074 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.074 { 00:20:29.074 "cntlid": 87, 00:20:29.074 "qid": 0, 00:20:29.074 "state": "enabled", 00:20:29.074 "thread": "nvmf_tgt_poll_group_000", 00:20:29.074 "listen_address": { 00:20:29.075 "trtype": "TCP", 00:20:29.075 "adrfam": "IPv4", 00:20:29.075 "traddr": "10.0.0.2", 00:20:29.075 "trsvcid": "4420" 00:20:29.075 }, 00:20:29.075 "peer_address": { 00:20:29.075 "trtype": "TCP", 00:20:29.075 "adrfam": "IPv4", 00:20:29.075 "traddr": "10.0.0.1", 00:20:29.075 "trsvcid": "55198" 00:20:29.075 }, 00:20:29.075 "auth": { 00:20:29.075 "state": "completed", 00:20:29.075 "digest": "sha384", 00:20:29.075 "dhgroup": "ffdhe6144" 00:20:29.075 } 00:20:29.075 } 00:20:29.075 ]' 00:20:29.075 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.075 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.075 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.332 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.332 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.332 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.332 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.332 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.898 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:30.831 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.089 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.463 00:20:32.463 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.463 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.463 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.721 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.721 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.721 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.721 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.721 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.721 { 00:20:32.721 "cntlid": 89, 00:20:32.721 "qid": 0, 00:20:32.721 "state": "enabled", 00:20:32.721 "thread": "nvmf_tgt_poll_group_000", 00:20:32.721 "listen_address": { 00:20:32.721 "trtype": "TCP", 00:20:32.721 "adrfam": "IPv4", 00:20:32.721 "traddr": "10.0.0.2", 00:20:32.721 "trsvcid": "4420" 00:20:32.721 }, 00:20:32.721 "peer_address": { 00:20:32.721 "trtype": "TCP", 00:20:32.721 "adrfam": "IPv4", 00:20:32.721 "traddr": "10.0.0.1", 00:20:32.721 "trsvcid": "55236" 00:20:32.721 }, 00:20:32.721 "auth": { 00:20:32.721 "state": "completed", 00:20:32.721 "digest": "sha384", 00:20:32.721 "dhgroup": "ffdhe8192" 00:20:32.721 } 00:20:32.721 } 00:20:32.721 ]' 00:20:32.721 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.979 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.979 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.979 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.979 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.979 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.979 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.979 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.237 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:20:34.611 14:14:51 
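[Note] Between the SPDK-initiator passes, each key pair is also exercised through the Linux kernel initiator via nvme-cli, as in the connect that just completed. The shape of that call is sketched below with the base64 payloads replaced by placeholders; the DHHC-1:<xx>: prefixes are kept as they appear in the trace, and the real values are recorded above:

# Kernel-initiator connect with DH-CHAP; secrets are passed on the
# command line in the DHHC-1 wire format.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret 'DHHC-1:00:<host-secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret>:'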
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.611 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:34.611 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.611 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.611 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.611 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.611 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.611 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.869 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.803 00:20:35.803 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.803 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.803 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.061 { 00:20:36.061 "cntlid": 91, 00:20:36.061 "qid": 0, 00:20:36.061 "state": "enabled", 00:20:36.061 "thread": "nvmf_tgt_poll_group_000", 00:20:36.061 "listen_address": { 00:20:36.061 "trtype": "TCP", 00:20:36.061 "adrfam": "IPv4", 00:20:36.061 "traddr": "10.0.0.2", 00:20:36.061 "trsvcid": "4420" 00:20:36.061 }, 00:20:36.061 "peer_address": { 00:20:36.061 "trtype": "TCP", 00:20:36.061 "adrfam": "IPv4", 00:20:36.061 "traddr": "10.0.0.1", 00:20:36.061 "trsvcid": "49436" 00:20:36.061 }, 00:20:36.061 "auth": { 00:20:36.061 "state": "completed", 00:20:36.061 "digest": "sha384", 00:20:36.061 "dhgroup": "ffdhe8192" 00:20:36.061 } 00:20:36.061 } 00:20:36.061 ]' 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.061 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.627 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.999 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.256 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.186 00:20:39.186 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.186 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.186 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.443 { 00:20:39.443 "cntlid": 93, 00:20:39.443 "qid": 0, 00:20:39.443 "state": "enabled", 00:20:39.443 "thread": "nvmf_tgt_poll_group_000", 00:20:39.443 "listen_address": { 00:20:39.443 "trtype": "TCP", 00:20:39.443 "adrfam": "IPv4", 00:20:39.443 "traddr": "10.0.0.2", 00:20:39.443 "trsvcid": "4420" 00:20:39.443 }, 00:20:39.443 "peer_address": { 00:20:39.443 "trtype": "TCP", 00:20:39.443 "adrfam": "IPv4", 00:20:39.443 "traddr": "10.0.0.1", 00:20:39.443 "trsvcid": "49464" 00:20:39.443 }, 00:20:39.443 "auth": { 00:20:39.443 "state": "completed", 00:20:39.443 "digest": "sha384", 00:20:39.443 "dhgroup": "ffdhe8192" 00:20:39.443 } 00:20:39.443 } 00:20:39.443 ]' 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.443 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.700 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.700 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.700 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.960 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:20:41.331 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.331 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:41.331 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.331 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.331 14:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.331 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.331 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.331 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.590 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.522 00:20:42.522 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.522 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.522 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.087 { 00:20:43.087 "cntlid": 95, 00:20:43.087 "qid": 0, 00:20:43.087 "state": "enabled", 00:20:43.087 "thread": "nvmf_tgt_poll_group_000", 00:20:43.087 "listen_address": { 00:20:43.087 "trtype": "TCP", 00:20:43.087 "adrfam": "IPv4", 00:20:43.087 "traddr": "10.0.0.2", 00:20:43.087 "trsvcid": "4420" 00:20:43.087 }, 00:20:43.087 "peer_address": { 00:20:43.087 "trtype": "TCP", 00:20:43.087 "adrfam": "IPv4", 00:20:43.087 "traddr": "10.0.0.1", 00:20:43.087 "trsvcid": "49492" 00:20:43.087 }, 00:20:43.087 "auth": { 00:20:43.087 "state": "completed", 00:20:43.087 "digest": "sha384", 00:20:43.087 "dhgroup": "ffdhe8192" 00:20:43.087 } 00:20:43.087 } 00:20:43.087 ]' 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.087 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.350 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.350 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.350 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.350 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.350 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.653 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.026 14:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.026 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.284 00:20:45.284 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.284 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.284 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.849 14:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.849 { 00:20:45.849 "cntlid": 97, 00:20:45.849 "qid": 0, 00:20:45.849 "state": "enabled", 00:20:45.849 "thread": "nvmf_tgt_poll_group_000", 00:20:45.849 "listen_address": { 00:20:45.849 "trtype": "TCP", 00:20:45.849 "adrfam": "IPv4", 00:20:45.849 "traddr": "10.0.0.2", 00:20:45.849 "trsvcid": "4420" 00:20:45.849 }, 00:20:45.849 "peer_address": { 00:20:45.849 "trtype": "TCP", 00:20:45.849 "adrfam": "IPv4", 00:20:45.849 "traddr": "10.0.0.1", 00:20:45.849 "trsvcid": "45326" 00:20:45.849 }, 00:20:45.849 "auth": { 00:20:45.849 "state": "completed", 00:20:45.849 "digest": "sha512", 00:20:45.849 "dhgroup": "null" 00:20:45.849 } 00:20:45.849 } 00:20:45.849 ]' 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.849 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.415 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.788 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.354 00:20:48.354 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.354 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.354 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.921 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.921 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.921 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.921 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.921 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.921 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.921 { 00:20:48.922 "cntlid": 99, 00:20:48.922 "qid": 0, 00:20:48.922 "state": "enabled", 00:20:48.922 "thread": "nvmf_tgt_poll_group_000", 00:20:48.922 "listen_address": { 00:20:48.922 "trtype": "TCP", 00:20:48.922 "adrfam": "IPv4", 00:20:48.922 
"traddr": "10.0.0.2", 00:20:48.922 "trsvcid": "4420" 00:20:48.922 }, 00:20:48.922 "peer_address": { 00:20:48.922 "trtype": "TCP", 00:20:48.922 "adrfam": "IPv4", 00:20:48.922 "traddr": "10.0.0.1", 00:20:48.922 "trsvcid": "45362" 00:20:48.922 }, 00:20:48.922 "auth": { 00:20:48.922 "state": "completed", 00:20:48.922 "digest": "sha512", 00:20:48.922 "dhgroup": "null" 00:20:48.922 } 00:20:48.922 } 00:20:48.922 ]' 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.922 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.488 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.861 14:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.861 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.426 00:20:51.426 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.427 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.427 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.684 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.684 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.684 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.684 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.684 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.684 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.684 { 00:20:51.684 "cntlid": 101, 00:20:51.684 "qid": 0, 00:20:51.684 "state": "enabled", 00:20:51.684 "thread": "nvmf_tgt_poll_group_000", 00:20:51.684 "listen_address": { 00:20:51.684 "trtype": "TCP", 00:20:51.684 "adrfam": "IPv4", 00:20:51.684 "traddr": "10.0.0.2", 00:20:51.684 "trsvcid": "4420" 00:20:51.684 }, 00:20:51.684 "peer_address": { 00:20:51.684 "trtype": "TCP", 00:20:51.684 "adrfam": "IPv4", 00:20:51.684 "traddr": "10.0.0.1", 00:20:51.684 "trsvcid": "45392" 00:20:51.684 }, 00:20:51.684 "auth": { 00:20:51.684 "state": "completed", 00:20:51.684 "digest": "sha512", 00:20:51.684 "dhgroup": "null" 
00:20:51.684 } 00:20:51.684 } 00:20:51.684 ]' 00:20:51.684 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.685 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.685 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.685 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:51.685 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.685 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.685 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.685 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.251 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:53.185 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.751 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.009 00:20:54.009 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.009 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.009 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.267 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.267 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.267 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.267 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.267 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.267 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.267 { 00:20:54.267 "cntlid": 103, 00:20:54.267 "qid": 0, 00:20:54.267 "state": "enabled", 00:20:54.267 "thread": "nvmf_tgt_poll_group_000", 00:20:54.267 "listen_address": { 00:20:54.267 "trtype": "TCP", 00:20:54.267 "adrfam": "IPv4", 00:20:54.267 "traddr": "10.0.0.2", 00:20:54.267 "trsvcid": "4420" 00:20:54.267 }, 00:20:54.267 "peer_address": { 00:20:54.267 "trtype": "TCP", 00:20:54.267 "adrfam": "IPv4", 00:20:54.267 "traddr": "10.0.0.1", 00:20:54.267 "trsvcid": "37360" 00:20:54.267 }, 00:20:54.267 "auth": { 00:20:54.267 "state": "completed", 00:20:54.267 "digest": "sha512", 00:20:54.267 "dhgroup": "null" 00:20:54.267 } 00:20:54.267 } 00:20:54.267 ]' 00:20:54.267 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.525 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.525 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.525 14:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:54.525 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.525 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.525 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.525 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.090 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.464 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.464 14:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.464 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.030 00:20:57.030 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.030 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.030 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.288 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.288 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.288 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.288 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.288 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.288 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.288 { 00:20:57.288 "cntlid": 105, 00:20:57.288 "qid": 0, 00:20:57.288 "state": "enabled", 00:20:57.288 "thread": "nvmf_tgt_poll_group_000", 00:20:57.288 "listen_address": { 00:20:57.288 "trtype": "TCP", 00:20:57.288 "adrfam": "IPv4", 00:20:57.288 "traddr": "10.0.0.2", 00:20:57.288 "trsvcid": "4420" 00:20:57.288 }, 00:20:57.288 "peer_address": { 00:20:57.288 "trtype": "TCP", 00:20:57.288 "adrfam": "IPv4", 00:20:57.288 "traddr": "10.0.0.1", 00:20:57.288 "trsvcid": "37382" 00:20:57.288 }, 00:20:57.288 "auth": { 00:20:57.288 "state": "completed", 00:20:57.288 "digest": "sha512", 00:20:57.288 "dhgroup": "ffdhe2048" 00:20:57.288 } 00:20:57.288 } 00:20:57.288 ]' 00:20:57.288 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.544 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.545 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.545 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.545 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.545 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.545 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.545 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.802 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.204 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.462 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.027 00:21:00.027 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.027 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.027 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.594 { 00:21:00.594 "cntlid": 107, 00:21:00.594 "qid": 0, 00:21:00.594 "state": "enabled", 00:21:00.594 "thread": "nvmf_tgt_poll_group_000", 00:21:00.594 "listen_address": { 00:21:00.594 "trtype": "TCP", 00:21:00.594 "adrfam": "IPv4", 00:21:00.594 "traddr": "10.0.0.2", 00:21:00.594 "trsvcid": "4420" 00:21:00.594 }, 00:21:00.594 "peer_address": { 00:21:00.594 "trtype": "TCP", 00:21:00.594 "adrfam": "IPv4", 00:21:00.594 "traddr": "10.0.0.1", 00:21:00.594 "trsvcid": "37416" 00:21:00.594 }, 00:21:00.594 "auth": { 00:21:00.594 "state": "completed", 00:21:00.594 "digest": "sha512", 00:21:00.594 "dhgroup": "ffdhe2048" 00:21:00.594 } 00:21:00.594 } 00:21:00.594 ]' 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.594 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.160 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.093 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
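The passes above all follow the same shape: target/auth.sh's connect_authenticate() pins the host initiator to one digest/dhgroup pair, registers the host on the subsystem with one of the pre-loaded keys, and attaches a controller so the DH-HMAC-CHAP handshake actually runs. Below is a condensed sketch of the sha512/ffdhe2048, key-index-2 pass traced directly above — not the script itself — assuming rpc.py from this workspace, the target on its default RPC socket, a host-side SPDK instance serving /var/tmp/host.sock, and key names key2/ckey2 registered earlier in the run (that setup is outside this excerpt):

#!/usr/bin/env bash
set -e
# Values taken from the trace above; everything else about the environment is assumed.
digest=sha512 dhgroup=ffdhe2048 keyid=2
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: restrict the initiator to the one digest/dhgroup pair under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: allow the host NQN with this round's key and controller key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: attach a controller; this connect is what performs DH-HMAC-CHAP.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

Only the three variables at the top change between iterations; the log repeats this block for every digest/dhgroup/key combination.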
00:21:02.659 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.917 00:21:02.917 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.917 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.917 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.176 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.176 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.176 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.176 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.176 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.176 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.176 { 00:21:03.176 "cntlid": 109, 00:21:03.176 "qid": 0, 00:21:03.176 "state": "enabled", 00:21:03.176 "thread": "nvmf_tgt_poll_group_000", 00:21:03.176 "listen_address": { 00:21:03.176 "trtype": "TCP", 00:21:03.176 "adrfam": "IPv4", 00:21:03.176 "traddr": "10.0.0.2", 00:21:03.176 "trsvcid": "4420" 00:21:03.176 }, 00:21:03.176 "peer_address": { 00:21:03.176 "trtype": "TCP", 00:21:03.176 "adrfam": "IPv4", 00:21:03.176 "traddr": "10.0.0.1", 00:21:03.176 "trsvcid": "46508" 00:21:03.176 }, 00:21:03.176 "auth": { 00:21:03.176 "state": "completed", 00:21:03.176 "digest": "sha512", 00:21:03.176 "dhgroup": "ffdhe2048" 00:21:03.176 } 00:21:03.176 } 00:21:03.176 ]' 00:21:03.176 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.434 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.434 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.434 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.434 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.434 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.434 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.434 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.000 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+:
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:05.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:05.373 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:05.631 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:05.889
00:21:05.889 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:05.889 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:05.889 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:06.455 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:06.455 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:06.455 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:06.455 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.455 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:06.455 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:06.455 {
00:21:06.455 "cntlid": 111,
00:21:06.455 "qid": 0,
00:21:06.455 "state": "enabled",
00:21:06.455 "thread": "nvmf_tgt_poll_group_000",
00:21:06.455 "listen_address": {
00:21:06.455 "trtype": "TCP",
00:21:06.455 "adrfam": "IPv4",
00:21:06.455 "traddr": "10.0.0.2",
00:21:06.455 "trsvcid": "4420"
00:21:06.455 },
00:21:06.455 "peer_address": {
00:21:06.455 "trtype": "TCP",
00:21:06.455 "adrfam": "IPv4",
00:21:06.455 "traddr": "10.0.0.1",
00:21:06.455 "trsvcid": "46522"
00:21:06.456 },
00:21:06.456 "auth": {
00:21:06.456 "state": "completed",
00:21:06.456 "digest": "sha512",
00:21:06.456 "dhgroup": "ffdhe2048"
00:21:06.456 }
00:21:06.456 }
00:21:06.456 ]'
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:06.456 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:07.021 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=:
00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:08.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:08.394 14:15:24
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.394 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.651 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:08.651 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.651 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.651 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:08.651 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:08.651 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.651 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.652 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.652 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.652 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.652 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.652 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.909 00:21:09.167 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.167 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.167 14:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.425 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.425 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.425 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.425 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.682 { 00:21:09.682 "cntlid": 113, 00:21:09.682 "qid": 0, 00:21:09.682 "state": "enabled", 00:21:09.682 "thread": "nvmf_tgt_poll_group_000", 00:21:09.682 "listen_address": { 00:21:09.682 "trtype": "TCP", 00:21:09.682 "adrfam": "IPv4", 00:21:09.682 "traddr": "10.0.0.2", 00:21:09.682 "trsvcid": "4420" 00:21:09.682 }, 00:21:09.682 "peer_address": { 00:21:09.682 "trtype": "TCP", 00:21:09.682 "adrfam": "IPv4", 00:21:09.682 "traddr": "10.0.0.1", 00:21:09.682 "trsvcid": "46532" 00:21:09.682 }, 00:21:09.682 "auth": { 00:21:09.682 "state": "completed", 00:21:09.682 "digest": "sha512", 00:21:09.682 "dhgroup": "ffdhe3072" 00:21:09.682 } 00:21:09.682 } 00:21:09.682 ]' 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.682 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.940 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.313 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.571 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.137 00:21:12.137 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.137 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.137 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.704 { 00:21:12.704 "cntlid": 115, 00:21:12.704 "qid": 0, 00:21:12.704 "state": "enabled", 00:21:12.704 "thread": "nvmf_tgt_poll_group_000", 00:21:12.704 "listen_address": { 00:21:12.704 "trtype": "TCP", 00:21:12.704 "adrfam": "IPv4", 00:21:12.704 "traddr": "10.0.0.2", 00:21:12.704 "trsvcid": "4420" 00:21:12.704 }, 00:21:12.704 "peer_address": { 00:21:12.704 "trtype": "TCP", 00:21:12.704 "adrfam": "IPv4", 00:21:12.704 "traddr": "10.0.0.1", 00:21:12.704 "trsvcid": "46566" 00:21:12.704 }, 00:21:12.704 "auth": { 00:21:12.704 "state": "completed", 00:21:12.704 "digest": "sha512", 00:21:12.704 "dhgroup": "ffdhe3072" 00:21:12.704 } 00:21:12.704 } 00:21:12.704 ]' 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.704 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.998 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.998 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.998 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.998 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.998 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.256 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.628 14:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.628 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.191 00:21:15.191 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.191 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.191 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.447 14:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.447 { 00:21:15.447 "cntlid": 117, 00:21:15.447 "qid": 0, 00:21:15.447 "state": "enabled", 00:21:15.447 "thread": "nvmf_tgt_poll_group_000", 00:21:15.447 "listen_address": { 00:21:15.447 "trtype": "TCP", 00:21:15.447 "adrfam": "IPv4", 00:21:15.447 "traddr": "10.0.0.2", 00:21:15.447 "trsvcid": "4420" 00:21:15.447 }, 00:21:15.447 "peer_address": { 00:21:15.447 "trtype": "TCP", 00:21:15.447 "adrfam": "IPv4", 00:21:15.447 "traddr": "10.0.0.1", 00:21:15.447 "trsvcid": "55306" 00:21:15.447 }, 00:21:15.447 "auth": { 00:21:15.447 "state": "completed", 00:21:15.447 "digest": "sha512", 00:21:15.447 "dhgroup": "ffdhe3072" 00:21:15.447 } 00:21:15.447 } 00:21:15.447 ]' 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.447 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.704 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.704 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.704 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.704 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.704 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.961 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:21:17.334 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.591 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.156 00:21:18.156 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.156 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.156 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.722 { 00:21:18.722 "cntlid": 119, 00:21:18.722 "qid": 0, 00:21:18.722 "state": "enabled", 00:21:18.722 "thread": 
"nvmf_tgt_poll_group_000", 00:21:18.722 "listen_address": { 00:21:18.722 "trtype": "TCP", 00:21:18.722 "adrfam": "IPv4", 00:21:18.722 "traddr": "10.0.0.2", 00:21:18.722 "trsvcid": "4420" 00:21:18.722 }, 00:21:18.722 "peer_address": { 00:21:18.722 "trtype": "TCP", 00:21:18.722 "adrfam": "IPv4", 00:21:18.722 "traddr": "10.0.0.1", 00:21:18.722 "trsvcid": "55338" 00:21:18.722 }, 00:21:18.722 "auth": { 00:21:18.722 "state": "completed", 00:21:18.722 "digest": "sha512", 00:21:18.722 "dhgroup": "ffdhe3072" 00:21:18.722 } 00:21:18.722 } 00:21:18.722 ]' 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.722 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.287 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.660 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.244 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.809 00:21:21.809 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.809 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.809 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.067 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.067 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.067 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.067 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.067 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.067 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.067 { 00:21:22.067 "cntlid": 121, 00:21:22.067 "qid": 0, 00:21:22.067 "state": "enabled", 00:21:22.067 "thread": "nvmf_tgt_poll_group_000", 00:21:22.067 "listen_address": { 00:21:22.067 "trtype": "TCP", 00:21:22.067 "adrfam": "IPv4", 00:21:22.067 "traddr": "10.0.0.2", 00:21:22.067 "trsvcid": "4420" 00:21:22.067 }, 00:21:22.067 "peer_address": { 00:21:22.067 "trtype": "TCP", 00:21:22.067 "adrfam": 
"IPv4", 00:21:22.067 "traddr": "10.0.0.1", 00:21:22.067 "trsvcid": "55372" 00:21:22.067 }, 00:21:22.067 "auth": { 00:21:22.067 "state": "completed", 00:21:22.067 "digest": "sha512", 00:21:22.067 "dhgroup": "ffdhe4096" 00:21:22.067 } 00:21:22.067 } 00:21:22.067 ]' 00:21:22.067 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.325 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.325 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.325 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.325 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.325 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.325 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.325 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.890 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.262 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.262 
14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.262 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.828 00:21:25.086 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.086 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.086 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.344 { 00:21:25.344 "cntlid": 123, 00:21:25.344 "qid": 0, 00:21:25.344 "state": "enabled", 00:21:25.344 "thread": "nvmf_tgt_poll_group_000", 00:21:25.344 "listen_address": { 00:21:25.344 "trtype": "TCP", 00:21:25.344 "adrfam": "IPv4", 00:21:25.344 "traddr": "10.0.0.2", 00:21:25.344 "trsvcid": "4420" 00:21:25.344 }, 00:21:25.344 "peer_address": { 00:21:25.344 "trtype": "TCP", 00:21:25.344 "adrfam": "IPv4", 00:21:25.344 "traddr": "10.0.0.1", 00:21:25.344 "trsvcid": "38506" 00:21:25.344 }, 00:21:25.344 "auth": { 00:21:25.344 "state": "completed", 00:21:25.344 "digest": "sha512", 00:21:25.344 "dhgroup": "ffdhe4096" 00:21:25.344 } 00:21:25.344 } 00:21:25.344 ]' 00:21:25.344 14:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.344 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.910 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.303 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.561 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.861 00:21:27.861 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.861 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.861 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.119 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.119 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.119 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.119 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.119 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.119 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.119 { 00:21:28.119 "cntlid": 125, 00:21:28.119 "qid": 0, 00:21:28.119 "state": "enabled", 00:21:28.119 "thread": "nvmf_tgt_poll_group_000", 00:21:28.119 "listen_address": { 00:21:28.119 "trtype": "TCP", 00:21:28.119 "adrfam": "IPv4", 00:21:28.119 "traddr": "10.0.0.2", 00:21:28.119 "trsvcid": "4420" 00:21:28.119 }, 00:21:28.119 "peer_address": { 00:21:28.119 "trtype": "TCP", 00:21:28.119 "adrfam": "IPv4", 00:21:28.119 "traddr": "10.0.0.1", 00:21:28.119 "trsvcid": "38532" 00:21:28.119 }, 00:21:28.119 "auth": { 00:21:28.119 "state": "completed", 00:21:28.119 "digest": "sha512", 00:21:28.119 "dhgroup": "ffdhe4096" 00:21:28.119 } 00:21:28.119 } 00:21:28.119 ]' 00:21:28.119 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.377 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.377 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.377 
14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.377 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.377 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.377 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.377 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.943 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:29.876 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.448 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:30.449 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:31.017
00:21:31.274 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:31.274 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:31.274 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:31.532 {
00:21:31.532 "cntlid": 127,
00:21:31.532 "qid": 0,
00:21:31.532 "state": "enabled",
00:21:31.532 "thread": "nvmf_tgt_poll_group_000",
00:21:31.532 "listen_address": {
00:21:31.532 "trtype": "TCP",
00:21:31.532 "adrfam": "IPv4",
00:21:31.532 "traddr": "10.0.0.2",
00:21:31.532 "trsvcid": "4420"
00:21:31.532 },
00:21:31.532 "peer_address": {
00:21:31.532 "trtype": "TCP",
00:21:31.532 "adrfam": "IPv4",
00:21:31.532 "traddr": "10.0.0.1",
00:21:31.532 "trsvcid": "38556"
00:21:31.532 },
00:21:31.532 "auth": {
00:21:31.532 "state": "completed",
00:21:31.532 "digest": "sha512",
00:21:31.532 "dhgroup": "ffdhe4096"
00:21:31.532 }
00:21:31.532 }
00:21:31.532 ]'
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:31.532 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:32.466 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=:
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:33.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:33.401 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:33.665 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:34.230
00:21:34.230 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:34.230 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:34.230 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:34.795 {
00:21:34.795 "cntlid": 129,
00:21:34.795 "qid": 0,
00:21:34.795 "state": "enabled",
00:21:34.795 "thread": "nvmf_tgt_poll_group_000",
00:21:34.795 "listen_address": {
00:21:34.795 "trtype": "TCP",
00:21:34.795 "adrfam": "IPv4",
00:21:34.795 "traddr": "10.0.0.2",
00:21:34.795 "trsvcid": "4420"
00:21:34.795 },
00:21:34.795 "peer_address": {
00:21:34.795 "trtype": "TCP",
00:21:34.795 "adrfam": "IPv4",
00:21:34.795 "traddr": "10.0.0.1",
00:21:34.795 "trsvcid": "53374"
00:21:34.795 },
00:21:34.795 "auth": {
00:21:34.795 "state": "completed",
00:21:34.795 "digest": "sha512",
00:21:34.795 "dhgroup": "ffdhe6144"
00:21:34.795 }
00:21:34.795 }
00:21:34.795 ]'
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:34.795 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:35.052 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=:
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:36.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:36.422 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:36.422 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:36.423 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:37.354
00:21:37.354 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:37.354 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:37.354 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:37.920 {
00:21:37.920 "cntlid": 131,
00:21:37.920 "qid": 0,
00:21:37.920 "state": "enabled",
00:21:37.920 "thread": "nvmf_tgt_poll_group_000",
00:21:37.920 "listen_address": {
00:21:37.920 "trtype": "TCP",
00:21:37.920 "adrfam": "IPv4",
00:21:37.920 "traddr": "10.0.0.2",
00:21:37.920 "trsvcid": "4420"
00:21:37.920 },
00:21:37.920 "peer_address": {
00:21:37.920 "trtype": "TCP",
00:21:37.920 "adrfam": "IPv4",
00:21:37.920 "traddr": "10.0.0.1",
00:21:37.920 "trsvcid": "53394"
00:21:37.920 },
00:21:37.920 "auth": {
00:21:37.920 "state": "completed",
00:21:37.920 "digest": "sha512",
00:21:37.920 "dhgroup": "ffdhe6144"
00:21:37.920 }
00:21:37.920 }
00:21:37.920 ]'
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:37.920 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:38.485 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==:
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:39.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:39.418 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:39.984 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.550
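[Reader's note] Every connect_authenticate pass in this trace drives the same RPC sequence. A minimal standalone sketch of one pass follows; paths, addresses, and NQNs are copied from this run, and it assumes (as the harness appears to) that the target answers on SPDK's default RPC socket while the host bdev layer answers on /var/tmp/host.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # Pin the host-side initiator to a single digest and DH group for this pass.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Authorize the host on the subsystem with a key pair (target-side RPC).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach, then inspect the negotiated auth parameters on the new qpair.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq filters on .auth.digest, .auth.dhgroup, and .auth.state are exactly what the [[ ... ]] assertions in the trace compare against.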
00:21:40.550 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:40.550 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:40.550 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:41.116 {
00:21:41.116 "cntlid": 133,
00:21:41.116 "qid": 0,
00:21:41.116 "state": "enabled",
00:21:41.116 "thread": "nvmf_tgt_poll_group_000",
00:21:41.116 "listen_address": {
00:21:41.116 "trtype": "TCP",
00:21:41.116 "adrfam": "IPv4",
00:21:41.116 "traddr": "10.0.0.2",
00:21:41.116 "trsvcid": "4420"
00:21:41.116 },
00:21:41.116 "peer_address": {
00:21:41.116 "trtype": "TCP",
00:21:41.116 "adrfam": "IPv4",
00:21:41.116 "traddr": "10.0.0.1",
00:21:41.116 "trsvcid": "53424"
00:21:41.116 },
00:21:41.116 "auth": {
00:21:41.116 "state": "completed",
00:21:41.116 "digest": "sha512",
00:21:41.116 "dhgroup": "ffdhe6144"
00:21:41.116 }
00:21:41.116 }
00:21:41.116 ]'
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:41.116 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:41.682 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+:
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:42.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:42.710 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:42.968 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:43.902
00:21:43.902 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:43.902 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:43.902 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:44.467 {
00:21:44.467 "cntlid": 135,
00:21:44.467 "qid": 0,
00:21:44.467 "state": "enabled",
00:21:44.467 "thread": "nvmf_tgt_poll_group_000",
00:21:44.467 "listen_address": {
00:21:44.467 "trtype": "TCP",
00:21:44.467 "adrfam": "IPv4",
00:21:44.467 "traddr": "10.0.0.2",
00:21:44.467 "trsvcid": "4420"
00:21:44.467 },
00:21:44.467 "peer_address": {
00:21:44.467 "trtype": "TCP",
00:21:44.467 "adrfam": "IPv4",
00:21:44.467 "traddr": "10.0.0.1",
00:21:44.467 "trsvcid": "53220"
00:21:44.467 },
00:21:44.467 "auth": {
00:21:44.467 "state": "completed",
00:21:44.467 "digest": "sha512",
00:21:44.467 "dhgroup": "ffdhe6144"
00:21:44.467 }
00:21:44.467 }
00:21:44.467 ]'
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:44.467 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:45.034 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=:
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:45.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:45.968 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:46.226 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.484 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:46.484 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:46.484 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:47.416
00:21:47.416 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:47.416 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:47.416 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:47.674 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:47.674 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
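[Reader's note] Between the SPDK host-side detach and the next iteration, each pass also re-authenticates through the kernel initiator with nvme-cli, as in the connect/disconnect pairs above. A sketch of that leg, assuming an nvme-cli build with DH-HMAC-CHAP support; host_key and ctrl_key are hypothetical placeholders standing in for the DHHC-1:xx:... secrets printed in the trace:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostid=cd6acfbe-4794-e311-a299-001e67a97b02
    # Placeholders: export these to the DHHC-1 secrets paired with the keys
    # configured on the subsystem (not invented here; see the trace above).
    : "${host_key:?set to a DHHC-1 host secret}"
    : "${ctrl_key:?set to a DHHC-1 controller secret}"
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
    nvme disconnect -n "$subnqn"   # the trace prints "disconnected 1 controller(s)"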
00:21:47.674 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.674 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.931 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.931 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.931 { 00:21:47.931 "cntlid": 137, 00:21:47.931 "qid": 0, 00:21:47.931 "state": "enabled", 00:21:47.931 "thread": "nvmf_tgt_poll_group_000", 00:21:47.931 "listen_address": { 00:21:47.931 "trtype": "TCP", 00:21:47.932 "adrfam": "IPv4", 00:21:47.932 "traddr": "10.0.0.2", 00:21:47.932 "trsvcid": "4420" 00:21:47.932 }, 00:21:47.932 "peer_address": { 00:21:47.932 "trtype": "TCP", 00:21:47.932 "adrfam": "IPv4", 00:21:47.932 "traddr": "10.0.0.1", 00:21:47.932 "trsvcid": "53248" 00:21:47.932 }, 00:21:47.932 "auth": { 00:21:47.932 "state": "completed", 00:21:47.932 "digest": "sha512", 00:21:47.932 "dhgroup": "ffdhe8192" 00:21:47.932 } 00:21:47.932 } 00:21:47.932 ]' 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.932 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.497 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.428 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.685 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.057 00:21:51.057 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.057 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.057 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.315 { 00:21:51.315 "cntlid": 139, 00:21:51.315 "qid": 0, 00:21:51.315 "state": "enabled", 00:21:51.315 "thread": "nvmf_tgt_poll_group_000", 00:21:51.315 "listen_address": { 00:21:51.315 "trtype": "TCP", 00:21:51.315 "adrfam": "IPv4", 00:21:51.315 "traddr": "10.0.0.2", 00:21:51.315 "trsvcid": "4420" 00:21:51.315 }, 00:21:51.315 "peer_address": { 00:21:51.315 "trtype": "TCP", 00:21:51.315 "adrfam": "IPv4", 00:21:51.315 "traddr": "10.0.0.1", 00:21:51.315 "trsvcid": "53268" 00:21:51.315 }, 00:21:51.315 "auth": { 00:21:51.315 "state": "completed", 00:21:51.315 "digest": "sha512", 00:21:51.315 "dhgroup": "ffdhe8192" 00:21:51.315 } 00:21:51.315 } 00:21:51.315 ]' 00:21:51.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.573 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.573 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.573 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.573 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.573 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.573 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.573 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.138 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2M2OGM5OWM5N2E0N2I3YTQyNTJkYzExOGRmMWJmYjZZRLvg: --dhchap-ctrl-secret DHHC-1:02:N2RkY2YwNTJiODgxZWZhMDhiMzRlZjA2NTIzMTRjN2YxNzJiNzcwZDg3NWU3ZTU3voljcA==: 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.070 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.636 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.569 00:21:54.569 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.569 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.569 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.827 { 00:21:54.827 "cntlid": 141, 00:21:54.827 "qid": 0, 00:21:54.827 "state": "enabled", 00:21:54.827 "thread": "nvmf_tgt_poll_group_000", 00:21:54.827 "listen_address": 
{ 00:21:54.827 "trtype": "TCP", 00:21:54.827 "adrfam": "IPv4", 00:21:54.827 "traddr": "10.0.0.2", 00:21:54.827 "trsvcid": "4420" 00:21:54.827 }, 00:21:54.827 "peer_address": { 00:21:54.827 "trtype": "TCP", 00:21:54.827 "adrfam": "IPv4", 00:21:54.827 "traddr": "10.0.0.1", 00:21:54.827 "trsvcid": "47810" 00:21:54.827 }, 00:21:54.827 "auth": { 00:21:54.827 "state": "completed", 00:21:54.827 "digest": "sha512", 00:21:54.827 "dhgroup": "ffdhe8192" 00:21:54.827 } 00:21:54.827 } 00:21:54.827 ]' 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.827 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.085 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.085 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.085 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.085 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.085 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.343 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZGQxNTM4ZWY0MGI4ODA4NWRmY2FmYzJkMDU3NjBiYjA3MzU0YjI2OWZhNDBiMjc1BZX8xQ==: --dhchap-ctrl-secret DHHC-1:01:NjJjMDNmMjFjNDNhZjlmNWQ4ZjZmZDM1ZThjNDU2MDOXuzo+: 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.715 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.996 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.946 00:21:57.946 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.946 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.946 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.511 { 00:21:58.511 "cntlid": 143, 00:21:58.511 "qid": 0, 00:21:58.511 "state": "enabled", 00:21:58.511 "thread": "nvmf_tgt_poll_group_000", 00:21:58.511 "listen_address": { 00:21:58.511 "trtype": "TCP", 00:21:58.511 "adrfam": "IPv4", 00:21:58.511 "traddr": "10.0.0.2", 00:21:58.511 "trsvcid": "4420" 00:21:58.511 }, 00:21:58.511 "peer_address": { 00:21:58.511 "trtype": "TCP", 00:21:58.511 "adrfam": "IPv4", 00:21:58.511 "traddr": "10.0.0.1", 00:21:58.511 "trsvcid": "47838" 00:21:58.511 }, 00:21:58.511 "auth": { 00:21:58.511 "state": "completed", 00:21:58.511 "digest": "sha512", 00:21:58.511 "dhgroup": 
"ffdhe8192" 00:21:58.511 } 00:21:58.511 } 00:21:58.511 ]' 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.511 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.077 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.010 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.268 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.641 00:22:01.641 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.641 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.641 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.898 { 00:22:01.898 "cntlid": 145, 00:22:01.898 "qid": 0, 00:22:01.898 "state": "enabled", 00:22:01.898 "thread": "nvmf_tgt_poll_group_000", 00:22:01.898 "listen_address": { 00:22:01.898 "trtype": "TCP", 00:22:01.898 "adrfam": "IPv4", 00:22:01.898 "traddr": "10.0.0.2", 00:22:01.898 "trsvcid": "4420" 00:22:01.898 }, 00:22:01.898 "peer_address": { 00:22:01.898 "trtype": "TCP", 00:22:01.898 "adrfam": "IPv4", 00:22:01.898 "traddr": "10.0.0.1", 00:22:01.898 "trsvcid": "47870" 00:22:01.898 }, 00:22:01.898 "auth": { 00:22:01.898 
"state": "completed", 00:22:01.898 "digest": "sha512", 00:22:01.898 "dhgroup": "ffdhe8192" 00:22:01.898 } 00:22:01.898 } 00:22:01.898 ]' 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.898 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.156 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.156 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.156 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.156 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.156 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.414 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZWY5NmU1N2Q1M2EyMGE0YzE3MGVkZTNlODQ3MTMzYmQ1Y2NlNzk5NjhlNTAxYWYzeKOhmw==: --dhchap-ctrl-secret DHHC-1:03:ZmJkOGEyOTEwOWE3OTM1NWQ2NmE4Y2FhOTBhZmFhYjc2YTcwYjI3MDJlZGM4OWEzOTVkYjMwNjBhNjQ4NjYxYlIaFU8=: 00:22:03.348 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.348 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:03.348 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.348 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:03.606 14:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:03.606 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:04.540 request: 00:22:04.540 { 00:22:04.540 "name": "nvme0", 00:22:04.540 "trtype": "tcp", 00:22:04.540 "traddr": "10.0.0.2", 00:22:04.540 "adrfam": "ipv4", 00:22:04.540 "trsvcid": "4420", 00:22:04.540 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:04.540 "prchk_reftag": false, 00:22:04.540 "prchk_guard": false, 00:22:04.540 "hdgst": false, 00:22:04.540 "ddgst": false, 00:22:04.540 "dhchap_key": "key2", 00:22:04.540 "method": "bdev_nvme_attach_controller", 00:22:04.540 "req_id": 1 00:22:04.540 } 00:22:04.540 Got JSON-RPC error response 00:22:04.540 response: 00:22:04.540 { 00:22:04.540 "code": -5, 00:22:04.540 "message": "Input/output error" 00:22:04.540 } 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.540 
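[editor's note] The trace above registers the host for DH-HMAC-CHAP with key1 only and then shows bdev_nvme_attach_controller with key2 failing with JSON-RPC code -5 (Input/output error). A minimal bash sketch of that negative check, reusing the subsystem/host NQNs, address, and socket path printed in this run; "rpc.py" abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path seen in the trace:

    # Target side: allow this host, but only with key1.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02" \
        --dhchap-key key1

    # Host side: attempt to attach with the wrong key (key2).
    # Authentication fails and the RPC returns -5 (Input/output error),
    # so success here would indicate a test failure.
    if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
        echo "unexpected success with mismatched key" >&2
        exit 1
    fi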
14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.540 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.473 request: 00:22:05.473 { 00:22:05.473 "name": "nvme0", 00:22:05.473 "trtype": "tcp", 00:22:05.473 "traddr": "10.0.0.2", 00:22:05.473 "adrfam": "ipv4", 00:22:05.473 "trsvcid": "4420", 00:22:05.473 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:05.473 "prchk_reftag": false, 00:22:05.473 "prchk_guard": false, 00:22:05.473 "hdgst": false, 00:22:05.473 "ddgst": false, 00:22:05.473 "dhchap_key": "key1", 00:22:05.473 "dhchap_ctrlr_key": "ckey2", 00:22:05.473 "method": "bdev_nvme_attach_controller", 00:22:05.473 "req_id": 1 00:22:05.473 } 00:22:05.473 Got JSON-RPC error response 00:22:05.473 response: 00:22:05.473 { 00:22:05.473 "code": -5, 00:22:05.473 "message": "Input/output error" 00:22:05.473 } 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.473 14:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.473 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.407 request: 00:22:06.407 { 00:22:06.407 "name": "nvme0", 00:22:06.407 "trtype": "tcp", 00:22:06.407 "traddr": "10.0.0.2", 00:22:06.407 "adrfam": "ipv4", 00:22:06.407 "trsvcid": "4420", 00:22:06.407 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.407 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:06.407 "prchk_reftag": false, 00:22:06.407 "prchk_guard": false, 00:22:06.407 "hdgst": false, 00:22:06.407 "ddgst": false, 00:22:06.407 "dhchap_key": "key1", 00:22:06.407 "dhchap_ctrlr_key": "ckey1", 00:22:06.407 "method": "bdev_nvme_attach_controller", 00:22:06.407 "req_id": 1 00:22:06.407 } 00:22:06.407 Got JSON-RPC error response 00:22:06.407 response: 00:22:06.407 { 00:22:06.407 "code": -5, 00:22:06.407 "message": "Input/output error" 00:22:06.407 } 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2517050 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2517050 ']' 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2517050 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.407 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2517050 00:22:06.407 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.407 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.407 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2517050' 00:22:06.407 killing process with pid 2517050 00:22:06.407 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2517050 00:22:06.407 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2517050 00:22:06.665 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:06.665 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.665 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.665 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2545557 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2545557 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2545557 ']' 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.666 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2545557 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2545557 ']' 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
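[editor's note] At this point the trace restarts nvmf_tgt inside the target network namespace with --wait-for-rpc -L nvmf_auth and blocks until the RPC socket answers. A sketch of that start-and-wait pattern, assuming the namespace and binary paths from this run (the polling step is a waitforlisten-style check; rpc.py's -t timeout flag is used for brevity):

    # Start the target in its namespace; --wait-for-rpc holds subsystem
    # initialization until framework_start_init is issued over RPC, and
    # -L nvmf_auth enables the auth debug log flag.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Block until the app is listening on the default RPC socket.
    rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods > /dev/null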
00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.231 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.490 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.863 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.863 { 00:22:08.863 "cntlid": 1, 00:22:08.863 "qid": 0, 00:22:08.863 "state": "enabled", 00:22:08.863 "thread": "nvmf_tgt_poll_group_000", 00:22:08.863 "listen_address": { 00:22:08.863 "trtype": "TCP", 00:22:08.863 "adrfam": "IPv4", 00:22:08.863 "traddr": "10.0.0.2", 00:22:08.863 "trsvcid": "4420" 00:22:08.863 }, 00:22:08.863 "peer_address": { 00:22:08.863 "trtype": "TCP", 00:22:08.863 "adrfam": "IPv4", 00:22:08.863 "traddr": "10.0.0.1", 00:22:08.863 "trsvcid": "47746" 00:22:08.863 }, 00:22:08.863 "auth": { 00:22:08.863 "state": "completed", 00:22:08.863 "digest": "sha512", 00:22:08.863 "dhgroup": "ffdhe8192" 00:22:08.863 } 00:22:08.863 } 00:22:08.863 ]' 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.863 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.121 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.121 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.121 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.121 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.121 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.378 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MWE3YmNjYjZiOTRlNGRkYWQ4Yjg3NWUzMDdlOTY1YjQyNGVhMWYxOWQ3Yzg1YmQ0NWQ3NDViNzc4NWQ4NmY4Y50/KmA=: 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:10.751 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.316 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.574 request: 00:22:11.574 { 00:22:11.574 "name": "nvme0", 00:22:11.574 "trtype": "tcp", 00:22:11.574 "traddr": "10.0.0.2", 00:22:11.574 "adrfam": "ipv4", 00:22:11.574 "trsvcid": "4420", 00:22:11.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:11.574 "prchk_reftag": false, 00:22:11.574 "prchk_guard": false, 00:22:11.574 "hdgst": false, 00:22:11.574 "ddgst": false, 00:22:11.574 "dhchap_key": "key3", 00:22:11.574 "method": "bdev_nvme_attach_controller", 00:22:11.574 "req_id": 1 00:22:11.574 } 00:22:11.574 Got JSON-RPC error response 00:22:11.574 response: 00:22:11.574 { 00:22:11.574 "code": -5, 00:22:11.574 "message": "Input/output error" 00:22:11.574 } 00:22:11.574 14:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:11.574 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:11.574 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:11.574 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:11.574 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:11.574 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:11.574 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:11.574 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.835 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.139 request: 00:22:12.139 { 00:22:12.139 "name": "nvme0", 00:22:12.139 "trtype": "tcp", 00:22:12.139 "traddr": "10.0.0.2", 00:22:12.139 "adrfam": "ipv4", 00:22:12.139 "trsvcid": "4420", 00:22:12.139 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:12.139 "prchk_reftag": false, 00:22:12.139 "prchk_guard": false, 00:22:12.139 "hdgst": false, 00:22:12.139 "ddgst": false, 00:22:12.139 "dhchap_key": "key3", 00:22:12.139 
"method": "bdev_nvme_attach_controller", 00:22:12.139 "req_id": 1 00:22:12.139 } 00:22:12.139 Got JSON-RPC error response 00:22:12.139 response: 00:22:12.139 { 00:22:12.139 "code": -5, 00:22:12.139 "message": "Input/output error" 00:22:12.139 } 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.139 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.705 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:12.706 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.706 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:12.706 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.706 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:12.706 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.706 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.706 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:13.272 request: 00:22:13.272 { 00:22:13.272 "name": "nvme0", 00:22:13.272 "trtype": "tcp", 00:22:13.272 "traddr": "10.0.0.2", 00:22:13.272 "adrfam": "ipv4", 00:22:13.272 "trsvcid": "4420", 00:22:13.272 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:13.272 "prchk_reftag": false, 00:22:13.272 "prchk_guard": false, 00:22:13.272 "hdgst": false, 00:22:13.272 "ddgst": false, 00:22:13.272 "dhchap_key": "key0", 00:22:13.272 "dhchap_ctrlr_key": "key1", 00:22:13.272 "method": "bdev_nvme_attach_controller", 00:22:13.272 "req_id": 1 00:22:13.272 } 00:22:13.272 Got JSON-RPC error response 00:22:13.272 response: 00:22:13.272 { 00:22:13.272 "code": -5, 00:22:13.272 "message": "Input/output error" 00:22:13.272 } 00:22:13.272 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:13.272 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.272 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.272 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.272 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.272 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:13.837 00:22:13.837 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:13.837 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
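[editor's note] The trace closes the positive path here: after the full digest and dhgroup lists were restored via bdev_nvme_set_options, attaching with key0 succeeds, and the run confirms it through bdev_nvme_get_controllers. A sketch of that verification step, reusing the host-side socket and NQNs from this run:

    # Host side: attach with key0 and confirm the controller registered as nvme0.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0

    # The controller list should name exactly the bdev created above.
    name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == nvme0 ]] || { echo "attach failed: got '$name'" >&2; exit 1; }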
00:22:13.837 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.095 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.095 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.095 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2517140 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2517140 ']' 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2517140 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2517140 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2517140' 00:22:14.354 killing process with pid 2517140 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2517140 00:22:14.354 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2517140 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.920 rmmod nvme_tcp 00:22:14.920 rmmod nvme_fabrics 00:22:14.920 rmmod nvme_keyring 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2545557 ']' 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2545557 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2545557 ']' 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2545557 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2545557 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2545557' 00:22:14.920 killing process with pid 2545557 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2545557 00:22:14.920 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2545557 00:22:15.178 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.178 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.178 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.178 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.178 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.178 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.178 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.179 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hLm /tmp/spdk.key-sha256.stj /tmp/spdk.key-sha384.Zrq /tmp/spdk.key-sha512.dOJ /tmp/spdk.key-sha512.jXq /tmp/spdk.key-sha384.tlX /tmp/spdk.key-sha256.jLx '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:17.720 00:22:17.720 real 4m7.642s 00:22:17.720 user 9m53.442s 00:22:17.720 sys 0m33.272s 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.720 ************************************ 00:22:17.720 END TEST nvmf_auth_target 00:22:17.720 ************************************ 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:17.720 14:16:34 
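[editor's note] The shutdown path above kills both app instances and unloads the host-side nvme modules, before the namespace address and generated key files are removed just below. A condensed sketch of the same cleanup, using the interface and key-file paths shown in this run:

    # Stop the target app if it is still running.
    kill -0 "$nvmfpid" 2> /dev/null && kill "$nvmfpid"

    # Unload host transport modules (the trace shows rmmod of
    # nvme_tcp, nvme_fabrics, and nvme_keyring as a result).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Drop the initiator-side test address and the DHCHAP key material.
    ip -4 addr flush cvl_0_1
    rm -f /tmp/spdk.key-*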
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.720 14:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.720 ************************************ 00:22:17.720 START TEST nvmf_bdevio_no_huge 00:22:17.720 ************************************ 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:17.721 * Looking for test storage... 00:22:17.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.721 14:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.721 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:20.255 14:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.255 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:20.256 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.256 14:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:20.256 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:20.256 Found net devices under 0000:84:00.0: cvl_0_0 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:20.256 Found net devices under 0000:84:00.1: cvl_0_1 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:20.256 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:22:20.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:22:20.256 00:22:20.256 --- 10.0.0.2 ping statistics --- 00:22:20.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.256 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:20.256 00:22:20.256 --- 10.0.0.1 ping statistics --- 00:22:20.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.256 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2548503 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2548503 00:22:20.256 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2548503 ']' 00:22:20.257 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.257 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.257 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
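The block above is nvmftestinit wiring the two E810 ports into a target/initiator topology: the target port is moved into its own network namespace so that traffic between 10.0.0.1 and 10.0.0.2 crosses the interfaces rather than the local loopback path (the usual reason for this namespace split). Condensed from the trace, with this run's device and namespace names, the setup amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt invocation just logged then runs inside that namespace; its -m 0x78 mask (binary 1111000) selects cores 3-6, which matches the four "Reactor started on core 3/4/5/6" notices that follow, and --no-huge -s 1024 is what gives this suite its name: the target is started with hugepages disabled and capped at 1024 MB of ordinary memory, as echoed in the DPDK EAL parameter line below.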
00:22:20.257 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.257 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:20.257 [2024-07-26 14:16:37.011847] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:22:20.257 [2024-07-26 14:16:37.011946] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:20.257 [2024-07-26 14:16:37.117530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.515 [2024-07-26 14:16:37.329716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.515 [2024-07-26 14:16:37.329828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.516 [2024-07-26 14:16:37.329865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.516 [2024-07-26 14:16:37.329894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.516 [2024-07-26 14:16:37.329920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.516 [2024-07-26 14:16:37.330075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:20.516 [2024-07-26 14:16:37.330177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:20.516 [2024-07-26 14:16:37.330259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:20.516 [2024-07-26 14:16:37.330262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.450 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.450 [2024-07-26 14:16:38.332817] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.709 14:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.709 Malloc0 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.709 [2024-07-26 14:16:38.374713] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.709 { 00:22:21.709 "params": { 00:22:21.709 "name": "Nvme$subsystem", 00:22:21.709 "trtype": "$TEST_TRANSPORT", 00:22:21.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.709 "adrfam": "ipv4", 00:22:21.709 "trsvcid": "$NVMF_PORT", 00:22:21.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.709 "hdgst": ${hdgst:-false}, 00:22:21.709 "ddgst": ${ddgst:-false} 00:22:21.709 }, 00:22:21.709 "method": "bdev_nvme_attach_controller" 00:22:21.709 } 00:22:21.709 EOF 00:22:21.709 )") 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:21.709 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:21.709 "params": { 00:22:21.709 "name": "Nvme1", 00:22:21.709 "trtype": "tcp", 00:22:21.709 "traddr": "10.0.0.2", 00:22:21.709 "adrfam": "ipv4", 00:22:21.709 "trsvcid": "4420", 00:22:21.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.709 "hdgst": false, 00:22:21.709 "ddgst": false 00:22:21.709 }, 00:22:21.709 "method": "bdev_nvme_attach_controller" 00:22:21.709 }' 00:22:21.709 [2024-07-26 14:16:38.443666] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:22:21.709 [2024-07-26 14:16:38.443770] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2548697 ] 00:22:21.709 [2024-07-26 14:16:38.558989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:21.967 [2024-07-26 14:16:38.686373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.967 [2024-07-26 14:16:38.686436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.967 [2024-07-26 14:16:38.686453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.226 I/O targets: 00:22:22.226 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:22.226 00:22:22.226 00:22:22.226 CUnit - A unit testing framework for C - Version 2.1-3 00:22:22.226 http://cunit.sourceforge.net/ 00:22:22.226 00:22:22.226 00:22:22.226 Suite: bdevio tests on: Nvme1n1 00:22:22.226 Test: blockdev write read block ...passed 00:22:22.226 Test: blockdev write zeroes read block ...passed 00:22:22.226 Test: blockdev write zeroes read no split ...passed 00:22:22.226 Test: blockdev write zeroes read split ...passed 00:22:22.226 Test: blockdev write zeroes read split partial ...passed 00:22:22.226 Test: blockdev reset ...[2024-07-26 14:16:39.102088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.226 [2024-07-26 14:16:39.102201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x874670 (9): Bad file descriptor 00:22:22.484 [2024-07-26 14:16:39.158761] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:22.484 passed 00:22:22.484 Test: blockdev write read 8 blocks ...passed 00:22:22.484 Test: blockdev write read size > 128k ...passed 00:22:22.484 Test: blockdev write read invalid size ...passed 00:22:22.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:22.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:22.484 Test: blockdev write read max offset ...passed 00:22:22.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:22.484 Test: blockdev writev readv 8 blocks ...passed 00:22:22.484 Test: blockdev writev readv 30 x 1block ...passed 00:22:22.484 Test: blockdev writev readv block ...passed 00:22:22.743 Test: blockdev writev readv size > 128k ...passed 00:22:22.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:22.743 Test: blockdev comparev and writev ...[2024-07-26 14:16:39.374323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.374361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.374389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.374409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.374888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.374916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.374941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.374958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.375409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.375443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.375469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.375487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.375907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.375933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.375958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.743 [2024-07-26 14:16:39.375976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:22.743 passed 00:22:22.743 Test: blockdev nvme passthru rw ...passed 00:22:22.743 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:16:39.457853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.743 [2024-07-26 14:16:39.457883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.458081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.743 [2024-07-26 14:16:39.458106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.458306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.743 [2024-07-26 14:16:39.458330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:22.743 [2024-07-26 14:16:39.458550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.743 [2024-07-26 14:16:39.458576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:22.743 passed 00:22:22.743 Test: blockdev nvme admin passthru ...passed 00:22:22.743 Test: blockdev copy ...passed 00:22:22.743 00:22:22.743 Run Summary: Type Total Ran Passed Failed Inactive 00:22:22.743 suites 1 1 n/a 0 0 00:22:22.743 tests 23 23 23 0 0 00:22:22.743 asserts 152 152 152 0 n/a 00:22:22.743 00:22:22.743 Elapsed time = 1.246 seconds 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.310 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.310 rmmod nvme_tcp 00:22:23.310 rmmod nvme_fabrics 00:22:23.310 rmmod nvme_keyring 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2548503 ']' 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2548503 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2548503 ']' 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2548503 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2548503 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2548503' 00:22:23.310 killing process with pid 2548503 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2548503 00:22:23.310 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2548503 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.876 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.409 00:22:26.409 real 0m8.620s 00:22:26.409 user 0m16.073s 00:22:26.409 sys 0m3.421s 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:26.409 ************************************ 00:22:26.409 END TEST nvmf_bdevio_no_huge 00:22:26.409 ************************************ 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:26.409 ************************************ 00:22:26.409 START TEST nvmf_tls 00:22:26.409 ************************************ 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:26.409 * Looking for test storage... 00:22:26.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.409 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.941 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:28.942 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:28.942 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:28.942 Found net devices under 0000:84:00.0: cvl_0_0 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:28.942 Found net devices under 0000:84:00.1: cvl_0_1 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.942 14:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:22:28.942 00:22:28.942 --- 10.0.0.2 ping statistics --- 00:22:28.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.942 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:22:28.942 00:22:28.942 --- 10.0.0.1 ping statistics --- 00:22:28.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.942 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2550986 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2550986 00:22:28.942 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2550986 ']' 00:22:28.943 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.943 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.943 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.943 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.943 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.943 [2024-07-26 14:16:45.720192] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:22:28.943 [2024-07-26 14:16:45.720289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.943 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.943 [2024-07-26 14:16:45.804179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.208 [2024-07-26 14:16:45.927131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.208 [2024-07-26 14:16:45.927193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.208 [2024-07-26 14:16:45.927210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.208 [2024-07-26 14:16:45.927223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.208 [2024-07-26 14:16:45.927235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.208 [2024-07-26 14:16:45.927266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:29.208 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:29.774 true 00:22:29.774 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.774 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:30.032 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:30.032 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:30.032 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:30.599 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.599 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:30.857 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:30.857 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:30.857 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:22:31.116 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:31.116 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:31.682 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:31.682 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:31.682 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:31.682 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:31.941 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:31.941 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:31.941 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:32.199 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.199 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:32.457 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:32.457 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:32.458 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:32.716 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.716 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.A7XRJ2WrQt 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GHbDnOicIm 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.A7XRJ2WrQt 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GHbDnOicIm 00:22:33.283 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:33.543 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:34.109 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.A7XRJ2WrQt 00:22:34.109 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.A7XRJ2WrQt 00:22:34.109 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:34.367 [2024-07-26 14:16:51.103906] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.367 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:34.625 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:35.193 [2024-07-26 14:16:51.789818] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.193 [2024-07-26 14:16:51.790145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.193 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.452 malloc0 00:22:35.452 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:35.710 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.A7XRJ2WrQt 00:22:35.968 [2024-07-26 14:16:52.807202] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:35.968 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.A7XRJ2WrQt 00:22:36.227 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.191 Initializing NVMe Controllers 00:22:46.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.191 Initialization complete. Launching workers. 00:22:46.191 ======================================================== 00:22:46.191 Latency(us) 00:22:46.191 Device Information : IOPS MiB/s Average min max 00:22:46.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7345.09 28.69 8716.25 1237.29 10947.94 00:22:46.191 ======================================================== 00:22:46.191 Total : 7345.09 28.69 8716.25 1237.29 10947.94 00:22:46.191 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A7XRJ2WrQt 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.A7XRJ2WrQt' 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2553013 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2553013 /var/tmp/bdevperf.sock 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2553013 ']' 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.191 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.191 [2024-07-26 14:17:03.012400] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:22:46.191 [2024-07-26 14:17:03.012524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553013 ] 00:22:46.191 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.450 [2024-07-26 14:17:03.093470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.450 [2024-07-26 14:17:03.232530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.727 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.727 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:46.727 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.A7XRJ2WrQt 00:22:46.986 [2024-07-26 14:17:03.643548] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.986 [2024-07-26 14:17:03.643695] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:46.986 TLSTESTn1 00:22:46.986 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:46.986 Running I/O for 10 seconds... 
00:22:59.218 00:22:59.218 Latency(us) 00:22:59.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.218 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.218 Verification LBA range: start 0x0 length 0x2000 00:22:59.218 TLSTESTn1 : 10.04 2583.90 10.09 0.00 0.00 49409.45 8252.68 73011.96 00:22:59.218 =================================================================================================================== 00:22:59.218 Total : 2583.90 10.09 0.00 0.00 49409.45 8252.68 73011.96 00:22:59.218 0 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2553013 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2553013 ']' 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2553013 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2553013 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2553013' 00:22:59.218 killing process with pid 2553013 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2553013 00:22:59.218 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.218 00:22:59.218 Latency(us) 00:22:59.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.218 =================================================================================================================== 00:22:59.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.218 [2024-07-26 14:17:13.984078] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.218 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2553013 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GHbDnOicIm 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GHbDnOicIm 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GHbDnOicIm 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.218 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GHbDnOicIm' 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2554239 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2554239 /var/tmp/bdevperf.sock 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2554239 ']' 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.219 [2024-07-26 14:17:14.358112] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:22:59.219 [2024-07-26 14:17:14.358216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554239 ] 00:22:59.219 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.219 [2024-07-26 14:17:14.440755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.219 [2024-07-26 14:17:14.579355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:59.219 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GHbDnOicIm 00:22:59.219 [2024-07-26 14:17:15.042969] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.219 [2024-07-26 14:17:15.043153] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.219 [2024-07-26 14:17:15.052367] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.219 [2024-07-26 14:17:15.053197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198c6d0 (107): Transport endpoint is not connected 00:22:59.219 [2024-07-26 14:17:15.054177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198c6d0 (9): Bad file descriptor 00:22:59.219 [2024-07-26 14:17:15.055174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.219 [2024-07-26 14:17:15.055219] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:59.219 [2024-07-26 14:17:15.055260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:59.219 request: 00:22:59.219 { 00:22:59.219 "name": "TLSTEST", 00:22:59.219 "trtype": "tcp", 00:22:59.219 "traddr": "10.0.0.2", 00:22:59.219 "adrfam": "ipv4", 00:22:59.219 "trsvcid": "4420", 00:22:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.219 "prchk_reftag": false, 00:22:59.219 "prchk_guard": false, 00:22:59.219 "hdgst": false, 00:22:59.219 "ddgst": false, 00:22:59.219 "psk": "/tmp/tmp.GHbDnOicIm", 00:22:59.219 "method": "bdev_nvme_attach_controller", 00:22:59.219 "req_id": 1 00:22:59.219 } 00:22:59.219 Got JSON-RPC error response 00:22:59.219 response: 00:22:59.219 { 00:22:59.219 "code": -5, 00:22:59.219 "message": "Input/output error" 00:22:59.219 } 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2554239 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2554239 ']' 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2554239 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2554239 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2554239' 00:22:59.219 killing process with pid 2554239 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2554239 00:22:59.219 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.219 00:22:59.219 Latency(us) 00:22:59.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.219 =================================================================================================================== 00:22:59.219 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.219 [2024-07-26 14:17:15.125213] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2554239 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A7XRJ2WrQt 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A7XRJ2WrQt 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A7XRJ2WrQt 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.A7XRJ2WrQt' 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2554365 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2554365 /var/tmp/bdevperf.sock 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2554365 ']' 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.219 [2024-07-26 14:17:15.502869] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:22:59.219 [2024-07-26 14:17:15.502968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554365 ] 00:22:59.219 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.219 [2024-07-26 14:17:15.582114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.219 [2024-07-26 14:17:15.709026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:59.219 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.A7XRJ2WrQt 00:22:59.478 [2024-07-26 14:17:16.122452] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.478 [2024-07-26 14:17:16.122619] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.478 [2024-07-26 14:17:16.135064] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:59.478 [2024-07-26 14:17:16.135110] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:59.479 [2024-07-26 14:17:16.135164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.479 [2024-07-26 14:17:16.135588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c046d0 (107): Transport endpoint is not connected 00:22:59.479 [2024-07-26 14:17:16.136568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c046d0 (9): Bad file descriptor 00:22:59.479 [2024-07-26 14:17:16.137573] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.479 [2024-07-26 14:17:16.137602] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:59.479 [2024-07-26 14:17:16.137633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:59.479 request: 00:22:59.479 { 00:22:59.479 "name": "TLSTEST", 00:22:59.479 "trtype": "tcp", 00:22:59.479 "traddr": "10.0.0.2", 00:22:59.479 "adrfam": "ipv4", 00:22:59.479 "trsvcid": "4420", 00:22:59.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.479 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:59.479 "prchk_reftag": false, 00:22:59.479 "prchk_guard": false, 00:22:59.479 "hdgst": false, 00:22:59.479 "ddgst": false, 00:22:59.479 "psk": "/tmp/tmp.A7XRJ2WrQt", 00:22:59.479 "method": "bdev_nvme_attach_controller", 00:22:59.479 "req_id": 1 00:22:59.479 } 00:22:59.479 Got JSON-RPC error response 00:22:59.479 response: 00:22:59.479 { 00:22:59.479 "code": -5, 00:22:59.479 "message": "Input/output error" 00:22:59.479 } 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2554365 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2554365 ']' 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2554365 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2554365 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2554365' 00:22:59.479 killing process with pid 2554365 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2554365 00:22:59.479 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.479 00:22:59.479 Latency(us) 00:22:59.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.479 =================================================================================================================== 00:22:59.479 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.479 [2024-07-26 14:17:16.210087] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.479 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2554365 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A7XRJ2WrQt 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A7XRJ2WrQt 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A7XRJ2WrQt 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.A7XRJ2WrQt' 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2554492 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2554492 /var/tmp/bdevperf.sock 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2554492 ']' 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.738 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.738 [2024-07-26 14:17:16.612284] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:22:59.738 [2024-07-26 14:17:16.612399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554492 ] 00:22:59.997 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.997 [2024-07-26 14:17:16.696386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.997 [2024-07-26 14:17:16.835388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.255 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.255 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:00.255 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.A7XRJ2WrQt 00:23:00.822 [2024-07-26 14:17:17.587012] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.822 [2024-07-26 14:17:17.587205] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:00.822 [2024-07-26 14:17:17.598959] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.822 [2024-07-26 14:17:17.599008] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.822 [2024-07-26 14:17:17.599063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:00.822 [2024-07-26 14:17:17.599405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15846d0 (107): Transport endpoint is not connected 00:23:00.822 [2024-07-26 14:17:17.600380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15846d0 (9): Bad file descriptor 00:23:00.823 [2024-07-26 14:17:17.601378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:00.823 [2024-07-26 14:17:17.601413] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.823 [2024-07-26 14:17:17.601462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:00.823 request: 00:23:00.823 { 00:23:00.823 "name": "TLSTEST", 00:23:00.823 "trtype": "tcp", 00:23:00.823 "traddr": "10.0.0.2", 00:23:00.823 "adrfam": "ipv4", 00:23:00.823 "trsvcid": "4420", 00:23:00.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.823 "prchk_reftag": false, 00:23:00.823 "prchk_guard": false, 00:23:00.823 "hdgst": false, 00:23:00.823 "ddgst": false, 00:23:00.823 "psk": "/tmp/tmp.A7XRJ2WrQt", 00:23:00.823 "method": "bdev_nvme_attach_controller", 00:23:00.823 "req_id": 1 00:23:00.823 } 00:23:00.823 Got JSON-RPC error response 00:23:00.823 response: 00:23:00.823 { 00:23:00.823 "code": -5, 00:23:00.823 "message": "Input/output error" 00:23:00.823 } 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2554492 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2554492 ']' 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2554492 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2554492 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2554492' 00:23:00.823 killing process with pid 2554492 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2554492 00:23:00.823 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.823 00:23:00.823 Latency(us) 00:23:00.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.823 =================================================================================================================== 00:23:00.823 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.823 [2024-07-26 14:17:17.681731] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:00.823 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2554492 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2554747 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2554747 /var/tmp/bdevperf.sock 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2554747 ']' 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.391 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.391 [2024-07-26 14:17:18.045061] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:23:01.391 [2024-07-26 14:17:18.045155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554747 ] 00:23:01.391 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.391 [2024-07-26 14:17:18.121388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.391 [2024-07-26 14:17:18.261642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.650 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.650 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:01.650 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:01.909 [2024-07-26 14:17:18.752056] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:01.909 [2024-07-26 14:17:18.753336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96fe10 (9): Bad file descriptor 00:23:01.909 [2024-07-26 14:17:18.754331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:01.909 [2024-07-26 14:17:18.754368] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:01.909 [2024-07-26 14:17:18.754408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:01.909 request: 00:23:01.909 { 00:23:01.909 "name": "TLSTEST", 00:23:01.909 "trtype": "tcp", 00:23:01.909 "traddr": "10.0.0.2", 00:23:01.909 "adrfam": "ipv4", 00:23:01.909 "trsvcid": "4420", 00:23:01.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.909 "prchk_reftag": false, 00:23:01.909 "prchk_guard": false, 00:23:01.909 "hdgst": false, 00:23:01.909 "ddgst": false, 00:23:01.909 "method": "bdev_nvme_attach_controller", 00:23:01.909 "req_id": 1 00:23:01.909 } 00:23:01.909 Got JSON-RPC error response 00:23:01.909 response: 00:23:01.909 { 00:23:01.909 "code": -5, 00:23:01.909 "message": "Input/output error" 00:23:01.909 } 00:23:01.909 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2554747 00:23:01.909 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2554747 ']' 00:23:01.909 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2554747 00:23:01.909 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.909 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.909 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2554747 00:23:02.168 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:02.168 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:02.168 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2554747' 00:23:02.168 killing process with pid 2554747 00:23:02.168 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2554747 00:23:02.168 Received shutdown signal, test time was about 10.000000 seconds 00:23:02.168 00:23:02.168 Latency(us) 00:23:02.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.168 =================================================================================================================== 00:23:02.168 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:02.168 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2554747 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2550986 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2550986 ']' 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2550986 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2550986 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2550986' 00:23:02.426 killing process with pid 2550986 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2550986 00:23:02.426 [2024-07-26 14:17:19.179301] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:02.426 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2550986 00:23:02.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:02.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:02.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:02.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:02.685 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wcSOV3rOi5 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wcSOV3rOi5 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2554910 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2554910 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2554910 ']' 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.944 14:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.944 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.944 [2024-07-26 14:17:19.693999] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:02.944 [2024-07-26 14:17:19.694099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.944 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.944 [2024-07-26 14:17:19.777310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.202 [2024-07-26 14:17:19.920007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.202 [2024-07-26 14:17:19.920081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.202 [2024-07-26 14:17:19.920101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.202 [2024-07-26 14:17:19.920120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.202 [2024-07-26 14:17:19.920135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:03.202 [2024-07-26 14:17:19.920174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wcSOV3rOi5 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wcSOV3rOi5 00:23:03.459 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.023 [2024-07-26 14:17:20.689751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.023 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:04.281 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.539 [2024-07-26 14:17:21.319446] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.539 [2024-07-26 14:17:21.319725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.539 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:04.797 malloc0 00:23:04.797 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:05.055 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5 00:23:05.313 [2024-07-26 14:17:22.135094] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wcSOV3rOi5 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wcSOV3rOi5' 00:23:05.313 14:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2555193 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2555193 /var/tmp/bdevperf.sock 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2555193 ']' 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.313 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.572 [2024-07-26 14:17:22.209812] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:05.572 [2024-07-26 14:17:22.209897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555193 ] 00:23:05.572 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.572 [2024-07-26 14:17:22.285554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.572 [2024-07-26 14:17:22.425642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.830 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.830 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:05.830 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5 00:23:06.088 [2024-07-26 14:17:22.949911] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.088 [2024-07-26 14:17:22.950080] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:06.345 TLSTESTn1 00:23:06.345 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:06.345 Running I/O for 10 seconds... 
00:23:18.552
00:23:18.552 Latency(us)
00:23:18.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.552 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:18.552 Verification LBA range: start 0x0 length 0x2000
00:23:18.552 TLSTESTn1 : 10.04 2574.84 10.06 0.00 0.00 49574.97 7815.77 59419.31
00:23:18.552 ===================================================================================================================
00:23:18.552 Total : 2574.84 10.06 0.00 0.00 49574.97 7815.77 59419.31
00:23:18.552 0
00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2555193 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2555193 ']' 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2555193 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2555193 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2555193' 00:23:18.552 killing process with pid 2555193 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2555193 00:23:18.552 Received shutdown signal, test time was about 10.000000 seconds
00:23:18.552
00:23:18.552 Latency(us)
00:23:18.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.552 ===================================================================================================================
00:23:18.552 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
[2024-07-26 14:17:33.307664] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2555193 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wcSOV3rOi5 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wcSOV3rOi5 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wcSOV3rOi5 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:18.552
14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wcSOV3rOi5 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wcSOV3rOi5' 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2556515 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2556515 /var/tmp/bdevperf.sock 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2556515 ']' 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.552 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.552 [2024-07-26 14:17:33.687522] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
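Note: the bdevperf instance starting here is the negative test at target/tls.sh@171; the key file was just opened up with chmod 0666, so the attach is expected to fail. The attach itself is one JSON-RPC call on /var/tmp/bdevperf.sock, shown verbatim in the request dump that follows. A minimal sketch of sending the same request by hand over the RPC socket (the raw-socket framing and single recv are simplifications):

```python
import json
import socket

# Params mirror the bdev_nvme_attach_controller request dump in this log.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.wcSOV3rOi5",
    },
}
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())
    # With 0666 on the key this comes back as code -1, "Operation not permitted".
    print(sock.recv(65536).decode())
```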
00:23:18.552 [2024-07-26 14:17:33.687632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556515 ] 00:23:18.552 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.552 [2024-07-26 14:17:33.771504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.552 [2024-07-26 14:17:33.908937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5 00:23:18.552 [2024-07-26 14:17:34.603871] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.552 [2024-07-26 14:17:34.604002] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:18.552 [2024-07-26 14:17:34.604035] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wcSOV3rOi5 00:23:18.552 request: 00:23:18.552 { 00:23:18.552 "name": "TLSTEST", 00:23:18.552 "trtype": "tcp", 00:23:18.552 "traddr": "10.0.0.2", 00:23:18.552 "adrfam": "ipv4", 00:23:18.552 "trsvcid": "4420", 00:23:18.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.552 "prchk_reftag": false, 00:23:18.552 "prchk_guard": false, 00:23:18.552 "hdgst": false, 00:23:18.552 "ddgst": false, 00:23:18.552 "psk": "/tmp/tmp.wcSOV3rOi5", 00:23:18.552 "method": "bdev_nvme_attach_controller", 00:23:18.552 "req_id": 1 00:23:18.552 } 00:23:18.552 Got JSON-RPC error response 00:23:18.552 response: 00:23:18.552 { 00:23:18.552 "code": -1, 00:23:18.552 "message": "Operation not permitted" 00:23:18.552 } 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2556515 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2556515 ']' 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2556515 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:18.552 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2556515 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2556515' 00:23:18.553 killing process with pid 2556515 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2556515 00:23:18.553 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.553 
00:23:18.553 Latency(us)
00:23:18.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.553 ===================================================================================================================
00:23:18.553 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2556515 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2554910 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2554910 ']' 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2554910 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.553 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2554910 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2554910' 00:23:18.553 killing process with pid 2554910 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2554910 00:23:18.553 [2024-07-26 14:17:35.027707] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2554910 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2556779 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2556779 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2556779 ']' 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.553 14:17:35
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.553 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.553 [2024-07-26 14:17:35.424386] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:18.553 [2024-07-26 14:17:35.424509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.814 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.814 [2024-07-26 14:17:35.515305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.814 [2024-07-26 14:17:35.654613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.814 [2024-07-26 14:17:35.654693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.814 [2024-07-26 14:17:35.654713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.814 [2024-07-26 14:17:35.654730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.814 [2024-07-26 14:17:35.654745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
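Note: the 'Incorrect permissions for PSK file' failures on both sides of this test come from the same gate: once the key is world-readable, the initiator's bdev_nvme_attach_controller (above) and the target's nvmf_subsystem_add_host (below) both refuse to load it. A sketch of the rule as this log exhibits it; the helper name and exact mask are assumptions, not SPDK's source:

```python
import os
import stat

def psk_file_permissions_ok(path: str) -> bool:
    # Reject keys that group or other can touch: the 0600 set earlier passes,
    # the 0666 applied at target/tls.sh@170 does not.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```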
00:23:18.814 [2024-07-26 14:17:35.654784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.108 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.108 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:19.108 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.108 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.108 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wcSOV3rOi5 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wcSOV3rOi5 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.wcSOV3rOi5 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wcSOV3rOi5 00:23:19.109 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:19.367 [2024-07-26 14:17:36.173902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.367 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:19.625 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:20.191 [2024-07-26 14:17:36.831714] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.191 [2024-07-26 14:17:36.832030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.191 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:20.757 malloc0 00:23:20.757 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:21.015 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5 00:23:21.273 [2024-07-26 14:17:38.078115] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:21.273 [2024-07-26 14:17:38.078170] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:21.273 [2024-07-26 14:17:38.078213] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:21.273 request: 00:23:21.273 { 00:23:21.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.273 "host": "nqn.2016-06.io.spdk:host1", 00:23:21.273 "psk": "/tmp/tmp.wcSOV3rOi5", 00:23:21.273 "method": "nvmf_subsystem_add_host", 00:23:21.273 "req_id": 1 00:23:21.273 } 00:23:21.273 Got JSON-RPC error response 00:23:21.273 response: 00:23:21.273 { 00:23:21.273 "code": -32603, 00:23:21.273 "message": "Internal error" 00:23:21.273 } 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2556779 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2556779 ']' 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2556779 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2556779 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2556779' 00:23:21.273 killing process with pid 2556779 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2556779 00:23:21.273 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2556779 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wcSOV3rOi5 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2557094 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 2557094 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2557094 ']' 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.840 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.840 [2024-07-26 14:17:38.559602] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:21.840 [2024-07-26 14:17:38.559694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.840 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.840 [2024-07-26 14:17:38.640864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.099 [2024-07-26 14:17:38.778245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.099 [2024-07-26 14:17:38.778313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.099 [2024-07-26 14:17:38.778333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.099 [2024-07-26 14:17:38.778349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.099 [2024-07-26 14:17:38.778363] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
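Note: with the key restored to 0600 at target/tls.sh@181, the setup_nvmf_tgt helper re-run below (target/tls.sh@185) succeeds. It reduces to the six RPCs traced in this log, driven here through the same rpc.py and default /var/tmp/spdk.sock used throughout the job:

```python
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args: str) -> None:
    subprocess.run([RPC, *args], check=True)

# Mirrors target/tls.sh@51-58: TCP transport, one subsystem, a TLS-enabled
# listener (-k), a malloc bdev as namespace 1, and the host-to-PSK binding.
rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.wcSOV3rOi5")
```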
00:23:22.099 [2024-07-26 14:17:38.778398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wcSOV3rOi5 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wcSOV3rOi5 00:23:22.099 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.357 [2024-07-26 14:17:39.229275] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.616 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.876 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:23.135 [2024-07-26 14:17:39.831012] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.135 [2024-07-26 14:17:39.831305] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.135 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:23.393 malloc0 00:23:23.393 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:23.652 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5 00:23:23.910 [2024-07-26 14:17:40.751373] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:23.910 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2557379 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2557379 /var/tmp/bdevperf.sock 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 2557379 ']' 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.911 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.170 [2024-07-26 14:17:40.825185] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:24.170 [2024-07-26 14:17:40.825274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557379 ] 00:23:24.170 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.170 [2024-07-26 14:17:40.901918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.170 [2024-07-26 14:17:41.043495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.430 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.430 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:24.430 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5 00:23:24.707 [2024-07-26 14:17:41.527448] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.707 [2024-07-26 14:17:41.527615] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:24.965 TLSTESTn1 00:23:24.965 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:25.224 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:25.224 "subsystems": [ 00:23:25.224 { 00:23:25.224 "subsystem": "keyring", 00:23:25.224 "config": [] 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "subsystem": "iobuf", 00:23:25.224 "config": [ 00:23:25.224 { 00:23:25.224 "method": "iobuf_set_options", 00:23:25.224 "params": { 00:23:25.224 "small_pool_count": 8192, 00:23:25.224 "large_pool_count": 1024, 00:23:25.224 "small_bufsize": 8192, 00:23:25.224 "large_bufsize": 135168 00:23:25.224 } 00:23:25.224 } 00:23:25.224 ] 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "subsystem": "sock", 00:23:25.224 "config": [ 00:23:25.224 { 00:23:25.224 "method": "sock_set_default_impl", 00:23:25.224 "params": { 00:23:25.224 "impl_name": "posix" 00:23:25.224 } 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "method": "sock_impl_set_options", 00:23:25.224 "params": { 00:23:25.224 "impl_name": "ssl", 00:23:25.224 "recv_buf_size": 4096, 00:23:25.224 "send_buf_size": 4096, 
00:23:25.224 "enable_recv_pipe": true, 00:23:25.224 "enable_quickack": false, 00:23:25.224 "enable_placement_id": 0, 00:23:25.224 "enable_zerocopy_send_server": true, 00:23:25.224 "enable_zerocopy_send_client": false, 00:23:25.224 "zerocopy_threshold": 0, 00:23:25.224 "tls_version": 0, 00:23:25.224 "enable_ktls": false 00:23:25.224 } 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "method": "sock_impl_set_options", 00:23:25.224 "params": { 00:23:25.224 "impl_name": "posix", 00:23:25.224 "recv_buf_size": 2097152, 00:23:25.224 "send_buf_size": 2097152, 00:23:25.224 "enable_recv_pipe": true, 00:23:25.224 "enable_quickack": false, 00:23:25.224 "enable_placement_id": 0, 00:23:25.224 "enable_zerocopy_send_server": true, 00:23:25.224 "enable_zerocopy_send_client": false, 00:23:25.224 "zerocopy_threshold": 0, 00:23:25.224 "tls_version": 0, 00:23:25.224 "enable_ktls": false 00:23:25.224 } 00:23:25.224 } 00:23:25.224 ] 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "subsystem": "vmd", 00:23:25.224 "config": [] 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "subsystem": "accel", 00:23:25.224 "config": [ 00:23:25.224 { 00:23:25.224 "method": "accel_set_options", 00:23:25.224 "params": { 00:23:25.224 "small_cache_size": 128, 00:23:25.224 "large_cache_size": 16, 00:23:25.224 "task_count": 2048, 00:23:25.224 "sequence_count": 2048, 00:23:25.224 "buf_count": 2048 00:23:25.224 } 00:23:25.224 } 00:23:25.224 ] 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "subsystem": "bdev", 00:23:25.224 "config": [ 00:23:25.224 { 00:23:25.224 "method": "bdev_set_options", 00:23:25.224 "params": { 00:23:25.224 "bdev_io_pool_size": 65535, 00:23:25.224 "bdev_io_cache_size": 256, 00:23:25.224 "bdev_auto_examine": true, 00:23:25.224 "iobuf_small_cache_size": 128, 00:23:25.224 "iobuf_large_cache_size": 16 00:23:25.224 } 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "method": "bdev_raid_set_options", 00:23:25.224 "params": { 00:23:25.224 "process_window_size_kb": 1024, 00:23:25.224 "process_max_bandwidth_mb_sec": 0 00:23:25.224 } 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "method": "bdev_iscsi_set_options", 00:23:25.224 "params": { 00:23:25.224 "timeout_sec": 30 00:23:25.224 } 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "method": "bdev_nvme_set_options", 00:23:25.224 "params": { 00:23:25.224 "action_on_timeout": "none", 00:23:25.224 "timeout_us": 0, 00:23:25.224 "timeout_admin_us": 0, 00:23:25.224 "keep_alive_timeout_ms": 10000, 00:23:25.224 "arbitration_burst": 0, 00:23:25.224 "low_priority_weight": 0, 00:23:25.224 "medium_priority_weight": 0, 00:23:25.224 "high_priority_weight": 0, 00:23:25.224 "nvme_adminq_poll_period_us": 10000, 00:23:25.224 "nvme_ioq_poll_period_us": 0, 00:23:25.224 "io_queue_requests": 0, 00:23:25.224 "delay_cmd_submit": true, 00:23:25.224 "transport_retry_count": 4, 00:23:25.224 "bdev_retry_count": 3, 00:23:25.224 "transport_ack_timeout": 0, 00:23:25.224 "ctrlr_loss_timeout_sec": 0, 00:23:25.224 "reconnect_delay_sec": 0, 00:23:25.224 "fast_io_fail_timeout_sec": 0, 00:23:25.224 "disable_auto_failback": false, 00:23:25.224 "generate_uuids": false, 00:23:25.224 "transport_tos": 0, 00:23:25.224 "nvme_error_stat": false, 00:23:25.224 "rdma_srq_size": 0, 00:23:25.224 "io_path_stat": false, 00:23:25.224 "allow_accel_sequence": false, 00:23:25.224 "rdma_max_cq_size": 0, 00:23:25.224 "rdma_cm_event_timeout_ms": 0, 00:23:25.224 "dhchap_digests": [ 00:23:25.224 "sha256", 00:23:25.224 "sha384", 00:23:25.224 "sha512" 00:23:25.224 ], 00:23:25.224 "dhchap_dhgroups": [ 00:23:25.224 "null", 00:23:25.224 "ffdhe2048", 00:23:25.224 
"ffdhe3072", 00:23:25.224 "ffdhe4096", 00:23:25.224 "ffdhe6144", 00:23:25.224 "ffdhe8192" 00:23:25.224 ] 00:23:25.224 } 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "method": "bdev_nvme_set_hotplug", 00:23:25.224 "params": { 00:23:25.224 "period_us": 100000, 00:23:25.224 "enable": false 00:23:25.224 } 00:23:25.224 }, 00:23:25.224 { 00:23:25.224 "method": "bdev_malloc_create", 00:23:25.224 "params": { 00:23:25.224 "name": "malloc0", 00:23:25.224 "num_blocks": 8192, 00:23:25.224 "block_size": 4096, 00:23:25.224 "physical_block_size": 4096, 00:23:25.224 "uuid": "2fc537ae-5b52-4d6c-b878-95faae62fd91", 00:23:25.224 "optimal_io_boundary": 0, 00:23:25.224 "md_size": 0, 00:23:25.224 "dif_type": 0, 00:23:25.224 "dif_is_head_of_md": false, 00:23:25.224 "dif_pi_format": 0 00:23:25.224 } 00:23:25.224 }, 00:23:25.225 { 00:23:25.225 "method": "bdev_wait_for_examine" 00:23:25.225 } 00:23:25.225 ] 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "subsystem": "nbd", 00:23:25.225 "config": [] 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "subsystem": "scheduler", 00:23:25.225 "config": [ 00:23:25.225 { 00:23:25.225 "method": "framework_set_scheduler", 00:23:25.225 "params": { 00:23:25.225 "name": "static" 00:23:25.225 } 00:23:25.225 } 00:23:25.225 ] 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "subsystem": "nvmf", 00:23:25.225 "config": [ 00:23:25.225 { 00:23:25.225 "method": "nvmf_set_config", 00:23:25.225 "params": { 00:23:25.225 "discovery_filter": "match_any", 00:23:25.225 "admin_cmd_passthru": { 00:23:25.225 "identify_ctrlr": false 00:23:25.225 } 00:23:25.225 } 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "method": "nvmf_set_max_subsystems", 00:23:25.225 "params": { 00:23:25.225 "max_subsystems": 1024 00:23:25.225 } 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "method": "nvmf_set_crdt", 00:23:25.225 "params": { 00:23:25.225 "crdt1": 0, 00:23:25.225 "crdt2": 0, 00:23:25.225 "crdt3": 0 00:23:25.225 } 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "method": "nvmf_create_transport", 00:23:25.225 "params": { 00:23:25.225 "trtype": "TCP", 00:23:25.225 "max_queue_depth": 128, 00:23:25.225 "max_io_qpairs_per_ctrlr": 127, 00:23:25.225 "in_capsule_data_size": 4096, 00:23:25.225 "max_io_size": 131072, 00:23:25.225 "io_unit_size": 131072, 00:23:25.225 "max_aq_depth": 128, 00:23:25.225 "num_shared_buffers": 511, 00:23:25.225 "buf_cache_size": 4294967295, 00:23:25.225 "dif_insert_or_strip": false, 00:23:25.225 "zcopy": false, 00:23:25.225 "c2h_success": false, 00:23:25.225 "sock_priority": 0, 00:23:25.225 "abort_timeout_sec": 1, 00:23:25.225 "ack_timeout": 0, 00:23:25.225 "data_wr_pool_size": 0 00:23:25.225 } 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "method": "nvmf_create_subsystem", 00:23:25.225 "params": { 00:23:25.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.225 "allow_any_host": false, 00:23:25.225 "serial_number": "SPDK00000000000001", 00:23:25.225 "model_number": "SPDK bdev Controller", 00:23:25.225 "max_namespaces": 10, 00:23:25.225 "min_cntlid": 1, 00:23:25.225 "max_cntlid": 65519, 00:23:25.225 "ana_reporting": false 00:23:25.225 } 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "method": "nvmf_subsystem_add_host", 00:23:25.225 "params": { 00:23:25.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.225 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.225 "psk": "/tmp/tmp.wcSOV3rOi5" 00:23:25.225 } 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "method": "nvmf_subsystem_add_ns", 00:23:25.225 "params": { 00:23:25.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.225 "namespace": { 00:23:25.225 "nsid": 1, 00:23:25.225 
"bdev_name": "malloc0", 00:23:25.225 "nguid": "2FC537AE5B524D6CB87895FAAE62FD91", 00:23:25.225 "uuid": "2fc537ae-5b52-4d6c-b878-95faae62fd91", 00:23:25.225 "no_auto_visible": false 00:23:25.225 } 00:23:25.225 } 00:23:25.225 }, 00:23:25.225 { 00:23:25.225 "method": "nvmf_subsystem_add_listener", 00:23:25.225 "params": { 00:23:25.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.225 "listen_address": { 00:23:25.225 "trtype": "TCP", 00:23:25.225 "adrfam": "IPv4", 00:23:25.225 "traddr": "10.0.0.2", 00:23:25.225 "trsvcid": "4420" 00:23:25.225 }, 00:23:25.225 "secure_channel": true 00:23:25.225 } 00:23:25.225 } 00:23:25.225 ] 00:23:25.225 } 00:23:25.225 ] 00:23:25.225 }' 00:23:25.225 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:25.839 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:25.839 "subsystems": [ 00:23:25.839 { 00:23:25.839 "subsystem": "keyring", 00:23:25.839 "config": [] 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "subsystem": "iobuf", 00:23:25.839 "config": [ 00:23:25.839 { 00:23:25.839 "method": "iobuf_set_options", 00:23:25.839 "params": { 00:23:25.839 "small_pool_count": 8192, 00:23:25.839 "large_pool_count": 1024, 00:23:25.839 "small_bufsize": 8192, 00:23:25.839 "large_bufsize": 135168 00:23:25.839 } 00:23:25.839 } 00:23:25.839 ] 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "subsystem": "sock", 00:23:25.839 "config": [ 00:23:25.839 { 00:23:25.839 "method": "sock_set_default_impl", 00:23:25.839 "params": { 00:23:25.839 "impl_name": "posix" 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "sock_impl_set_options", 00:23:25.839 "params": { 00:23:25.839 "impl_name": "ssl", 00:23:25.839 "recv_buf_size": 4096, 00:23:25.839 "send_buf_size": 4096, 00:23:25.839 "enable_recv_pipe": true, 00:23:25.839 "enable_quickack": false, 00:23:25.839 "enable_placement_id": 0, 00:23:25.839 "enable_zerocopy_send_server": true, 00:23:25.839 "enable_zerocopy_send_client": false, 00:23:25.839 "zerocopy_threshold": 0, 00:23:25.839 "tls_version": 0, 00:23:25.839 "enable_ktls": false 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "sock_impl_set_options", 00:23:25.839 "params": { 00:23:25.839 "impl_name": "posix", 00:23:25.839 "recv_buf_size": 2097152, 00:23:25.839 "send_buf_size": 2097152, 00:23:25.839 "enable_recv_pipe": true, 00:23:25.839 "enable_quickack": false, 00:23:25.839 "enable_placement_id": 0, 00:23:25.839 "enable_zerocopy_send_server": true, 00:23:25.839 "enable_zerocopy_send_client": false, 00:23:25.839 "zerocopy_threshold": 0, 00:23:25.839 "tls_version": 0, 00:23:25.839 "enable_ktls": false 00:23:25.839 } 00:23:25.839 } 00:23:25.839 ] 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "subsystem": "vmd", 00:23:25.839 "config": [] 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "subsystem": "accel", 00:23:25.839 "config": [ 00:23:25.839 { 00:23:25.839 "method": "accel_set_options", 00:23:25.839 "params": { 00:23:25.839 "small_cache_size": 128, 00:23:25.839 "large_cache_size": 16, 00:23:25.839 "task_count": 2048, 00:23:25.839 "sequence_count": 2048, 00:23:25.839 "buf_count": 2048 00:23:25.839 } 00:23:25.839 } 00:23:25.839 ] 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "subsystem": "bdev", 00:23:25.839 "config": [ 00:23:25.839 { 00:23:25.839 "method": "bdev_set_options", 00:23:25.839 "params": { 00:23:25.839 "bdev_io_pool_size": 65535, 00:23:25.839 "bdev_io_cache_size": 256, 00:23:25.839 
"bdev_auto_examine": true, 00:23:25.839 "iobuf_small_cache_size": 128, 00:23:25.839 "iobuf_large_cache_size": 16 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "bdev_raid_set_options", 00:23:25.839 "params": { 00:23:25.839 "process_window_size_kb": 1024, 00:23:25.839 "process_max_bandwidth_mb_sec": 0 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "bdev_iscsi_set_options", 00:23:25.839 "params": { 00:23:25.839 "timeout_sec": 30 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "bdev_nvme_set_options", 00:23:25.839 "params": { 00:23:25.839 "action_on_timeout": "none", 00:23:25.839 "timeout_us": 0, 00:23:25.839 "timeout_admin_us": 0, 00:23:25.839 "keep_alive_timeout_ms": 10000, 00:23:25.839 "arbitration_burst": 0, 00:23:25.839 "low_priority_weight": 0, 00:23:25.839 "medium_priority_weight": 0, 00:23:25.839 "high_priority_weight": 0, 00:23:25.839 "nvme_adminq_poll_period_us": 10000, 00:23:25.839 "nvme_ioq_poll_period_us": 0, 00:23:25.839 "io_queue_requests": 512, 00:23:25.839 "delay_cmd_submit": true, 00:23:25.839 "transport_retry_count": 4, 00:23:25.839 "bdev_retry_count": 3, 00:23:25.839 "transport_ack_timeout": 0, 00:23:25.839 "ctrlr_loss_timeout_sec": 0, 00:23:25.839 "reconnect_delay_sec": 0, 00:23:25.839 "fast_io_fail_timeout_sec": 0, 00:23:25.839 "disable_auto_failback": false, 00:23:25.839 "generate_uuids": false, 00:23:25.839 "transport_tos": 0, 00:23:25.839 "nvme_error_stat": false, 00:23:25.839 "rdma_srq_size": 0, 00:23:25.839 "io_path_stat": false, 00:23:25.839 "allow_accel_sequence": false, 00:23:25.839 "rdma_max_cq_size": 0, 00:23:25.839 "rdma_cm_event_timeout_ms": 0, 00:23:25.839 "dhchap_digests": [ 00:23:25.839 "sha256", 00:23:25.839 "sha384", 00:23:25.839 "sha512" 00:23:25.839 ], 00:23:25.839 "dhchap_dhgroups": [ 00:23:25.839 "null", 00:23:25.839 "ffdhe2048", 00:23:25.839 "ffdhe3072", 00:23:25.839 "ffdhe4096", 00:23:25.839 "ffdhe6144", 00:23:25.839 "ffdhe8192" 00:23:25.839 ] 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "bdev_nvme_attach_controller", 00:23:25.839 "params": { 00:23:25.839 "name": "TLSTEST", 00:23:25.839 "trtype": "TCP", 00:23:25.839 "adrfam": "IPv4", 00:23:25.839 "traddr": "10.0.0.2", 00:23:25.839 "trsvcid": "4420", 00:23:25.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.839 "prchk_reftag": false, 00:23:25.839 "prchk_guard": false, 00:23:25.839 "ctrlr_loss_timeout_sec": 0, 00:23:25.839 "reconnect_delay_sec": 0, 00:23:25.839 "fast_io_fail_timeout_sec": 0, 00:23:25.839 "psk": "/tmp/tmp.wcSOV3rOi5", 00:23:25.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.839 "hdgst": false, 00:23:25.839 "ddgst": false 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "bdev_nvme_set_hotplug", 00:23:25.839 "params": { 00:23:25.839 "period_us": 100000, 00:23:25.839 "enable": false 00:23:25.839 } 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "method": "bdev_wait_for_examine" 00:23:25.839 } 00:23:25.839 ] 00:23:25.839 }, 00:23:25.839 { 00:23:25.839 "subsystem": "nbd", 00:23:25.840 "config": [] 00:23:25.840 } 00:23:25.840 ] 00:23:25.840 }' 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2557379 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2557379 ']' 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2557379 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2557379 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2557379' 00:23:25.840 killing process with pid 2557379 00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2557379 00:23:25.840 Received shutdown signal, test time was about 10.000000 seconds
00:23:25.840
00:23:25.840 Latency(us)
00:23:25.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:25.840 ===================================================================================================================
00:23:25.840 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:25.840 [2024-07-26 14:17:42.470283] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:25.840 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2557379 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2557094 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2557094 ']' 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2557094 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2557094 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2557094' 00:23:26.118 killing process with pid 2557094 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2557094 00:23:26.118 [2024-07-26 14:17:42.833676] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.118 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2557094 00:23:26.377 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:26.377 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.377 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.377 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:26.377 "subsystems": [ 00:23:26.377 { 00:23:26.377
"subsystem": "iobuf", 00:23:26.377 "config": [ 00:23:26.377 { 00:23:26.377 "method": "iobuf_set_options", 00:23:26.377 "params": { 00:23:26.377 "small_pool_count": 8192, 00:23:26.377 "large_pool_count": 1024, 00:23:26.377 "small_bufsize": 8192, 00:23:26.377 "large_bufsize": 135168 00:23:26.377 } 00:23:26.377 } 00:23:26.377 ] 00:23:26.377 }, 00:23:26.377 { 00:23:26.377 "subsystem": "sock", 00:23:26.377 "config": [ 00:23:26.377 { 00:23:26.377 "method": "sock_set_default_impl", 00:23:26.377 "params": { 00:23:26.377 "impl_name": "posix" 00:23:26.377 } 00:23:26.377 }, 00:23:26.377 { 00:23:26.377 "method": "sock_impl_set_options", 00:23:26.377 "params": { 00:23:26.377 "impl_name": "ssl", 00:23:26.377 "recv_buf_size": 4096, 00:23:26.378 "send_buf_size": 4096, 00:23:26.378 "enable_recv_pipe": true, 00:23:26.378 "enable_quickack": false, 00:23:26.378 "enable_placement_id": 0, 00:23:26.378 "enable_zerocopy_send_server": true, 00:23:26.378 "enable_zerocopy_send_client": false, 00:23:26.378 "zerocopy_threshold": 0, 00:23:26.378 "tls_version": 0, 00:23:26.378 "enable_ktls": false 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "sock_impl_set_options", 00:23:26.378 "params": { 00:23:26.378 "impl_name": "posix", 00:23:26.378 "recv_buf_size": 2097152, 00:23:26.378 "send_buf_size": 2097152, 00:23:26.378 "enable_recv_pipe": true, 00:23:26.378 "enable_quickack": false, 00:23:26.378 "enable_placement_id": 0, 00:23:26.378 "enable_zerocopy_send_server": true, 00:23:26.378 "enable_zerocopy_send_client": false, 00:23:26.378 "zerocopy_threshold": 0, 00:23:26.378 "tls_version": 0, 00:23:26.378 "enable_ktls": false 00:23:26.378 } 00:23:26.378 } 00:23:26.378 ] 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "subsystem": "vmd", 00:23:26.378 "config": [] 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "subsystem": "accel", 00:23:26.378 "config": [ 00:23:26.378 { 00:23:26.378 "method": "accel_set_options", 00:23:26.378 "params": { 00:23:26.378 "small_cache_size": 128, 00:23:26.378 "large_cache_size": 16, 00:23:26.378 "task_count": 2048, 00:23:26.378 "sequence_count": 2048, 00:23:26.378 "buf_count": 2048 00:23:26.378 } 00:23:26.378 } 00:23:26.378 ] 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "subsystem": "bdev", 00:23:26.378 "config": [ 00:23:26.378 { 00:23:26.378 "method": "bdev_set_options", 00:23:26.378 "params": { 00:23:26.378 "bdev_io_pool_size": 65535, 00:23:26.378 "bdev_io_cache_size": 256, 00:23:26.378 "bdev_auto_examine": true, 00:23:26.378 "iobuf_small_cache_size": 128, 00:23:26.378 "iobuf_large_cache_size": 16 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "bdev_raid_set_options", 00:23:26.378 "params": { 00:23:26.378 "process_window_size_kb": 1024, 00:23:26.378 "process_max_bandwidth_mb_sec": 0 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "bdev_iscsi_set_options", 00:23:26.378 "params": { 00:23:26.378 "timeout_sec": 30 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "bdev_nvme_set_options", 00:23:26.378 "params": { 00:23:26.378 "action_on_timeout": "none", 00:23:26.378 "timeout_us": 0, 00:23:26.378 "timeout_admin_us": 0, 00:23:26.378 "keep_alive_timeout_ms": 10000, 00:23:26.378 "arbitration_burst": 0, 00:23:26.378 "low_priority_weight": 0, 00:23:26.378 "medium_priority_weight": 0, 00:23:26.378 "high_priority_weight": 0, 00:23:26.378 "nvme_adminq_poll_period_us": 10000, 00:23:26.378 "nvme_ioq_poll_period_us": 0, 00:23:26.378 "io_queue_requests": 0, 00:23:26.378 "delay_cmd_submit": true, 00:23:26.378 "transport_retry_count": 4, 
00:23:26.378 "bdev_retry_count": 3, 00:23:26.378 "transport_ack_timeout": 0, 00:23:26.378 "ctrlr_loss_timeout_sec": 0, 00:23:26.378 "reconnect_delay_sec": 0, 00:23:26.378 "fast_io_fail_timeout_sec": 0, 00:23:26.378 "disable_auto_failback": false, 00:23:26.378 "generate_uuids": false, 00:23:26.378 "transport_tos": 0, 00:23:26.378 "nvme_error_stat": false, 00:23:26.378 "rdma_srq_size": 0, 00:23:26.378 "io_path_stat": false, 00:23:26.378 "allow_accel_sequence": false, 00:23:26.378 "rdma_max_cq_size": 0, 00:23:26.378 "rdma_cm_event_timeout_ms": 0, 00:23:26.378 "dhchap_digests": [ 00:23:26.378 "sha256", 00:23:26.378 "sha384", 00:23:26.378 "sha512" 00:23:26.378 ], 00:23:26.378 "dhchap_dhgroups": [ 00:23:26.378 "null", 00:23:26.378 "ffdhe2048", 00:23:26.378 "ffdhe3072", 00:23:26.378 "ffdhe4096", 00:23:26.378 "ffdhe6144", 00:23:26.378 "ffdhe8192" 00:23:26.378 ] 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "bdev_nvme_set_hotplug", 00:23:26.378 "params": { 00:23:26.378 "period_us": 100000, 00:23:26.378 "enable": false 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "bdev_malloc_create", 00:23:26.378 "params": { 00:23:26.378 "name": "malloc0", 00:23:26.378 "num_blocks": 8192, 00:23:26.378 "block_size": 4096, 00:23:26.378 "physical_block_size": 4096, 00:23:26.378 "uuid": "2fc537ae-5b52-4d6c-b878-95faae62fd91", 00:23:26.378 "optimal_io_boundary": 0, 00:23:26.378 "md_size": 0, 00:23:26.378 "dif_type": 0, 00:23:26.378 "dif_is_head_of_md": false, 00:23:26.378 "dif_pi_format": 0 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "bdev_wait_for_examine" 00:23:26.378 } 00:23:26.378 ] 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "subsystem": "nbd", 00:23:26.378 "config": [] 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "subsystem": "scheduler", 00:23:26.378 "config": [ 00:23:26.378 { 00:23:26.378 "method": "framework_set_scheduler", 00:23:26.378 "params": { 00:23:26.378 "name": "static" 00:23:26.378 } 00:23:26.378 } 00:23:26.378 ] 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "subsystem": "nvmf", 00:23:26.378 "config": [ 00:23:26.378 { 00:23:26.378 "method": "nvmf_set_config", 00:23:26.378 "params": { 00:23:26.378 "discovery_filter": "match_any", 00:23:26.378 "admin_cmd_passthru": { 00:23:26.378 "identify_ctrlr": false 00:23:26.378 } 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "nvmf_set_max_subsystems", 00:23:26.378 "params": { 00:23:26.378 "max_subsystems": 1024 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "nvmf_set_crdt", 00:23:26.378 "params": { 00:23:26.378 "crdt1": 0, 00:23:26.378 "crdt2": 0, 00:23:26.378 "crdt3": 0 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "nvmf_create_transport", 00:23:26.378 "params": { 00:23:26.378 "trtype": "TCP", 00:23:26.378 "max_queue_depth": 128, 00:23:26.378 "max_io_qpairs_per_ctrlr": 127, 00:23:26.378 "in_capsule_data_size": 4096, 00:23:26.378 "max_io_size": 131072, 00:23:26.378 "io_unit_size": 131072, 00:23:26.378 "max_aq_depth": 128, 00:23:26.378 "num_shared_buffers": 511, 00:23:26.378 "buf_cache_size": 4294967295, 00:23:26.378 "dif_insert_or_strip": false, 00:23:26.378 "zcopy": false, 00:23:26.378 "c2h_success": false, 00:23:26.378 "sock_priority": 0, 00:23:26.378 "abort_timeout_sec": 1, 00:23:26.378 "ack_timeout": 0, 00:23:26.378 "data_wr_pool_size": 0 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "nvmf_create_subsystem", 00:23:26.378 "params": { 00:23:26.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.378 
"allow_any_host": false, 00:23:26.378 "serial_number": "SPDK00000000000001", 00:23:26.378 "model_number": "SPDK bdev Controller", 00:23:26.378 "max_namespaces": 10, 00:23:26.378 "min_cntlid": 1, 00:23:26.378 "max_cntlid": 65519, 00:23:26.378 "ana_reporting": false 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "nvmf_subsystem_add_host", 00:23:26.378 "params": { 00:23:26.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.378 "host": "nqn.2016-06.io.spdk:host1", 00:23:26.378 "psk": "/tmp/tmp.wcSOV3rOi5" 00:23:26.378 } 00:23:26.378 }, 00:23:26.378 { 00:23:26.378 "method": "nvmf_subsystem_add_ns", 00:23:26.379 "params": { 00:23:26.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.379 "namespace": { 00:23:26.379 "nsid": 1, 00:23:26.379 "bdev_name": "malloc0", 00:23:26.379 "nguid": "2FC537AE5B524D6CB87895FAAE62FD91", 00:23:26.379 "uuid": "2fc537ae-5b52-4d6c-b878-95faae62fd91", 00:23:26.379 "no_auto_visible": false 00:23:26.379 } 00:23:26.379 } 00:23:26.379 }, 00:23:26.379 { 00:23:26.379 "method": "nvmf_subsystem_add_listener", 00:23:26.379 "params": { 00:23:26.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.379 "listen_address": { 00:23:26.379 "trtype": "TCP", 00:23:26.379 "adrfam": "IPv4", 00:23:26.379 "traddr": "10.0.0.2", 00:23:26.379 "trsvcid": "4420" 00:23:26.379 }, 00:23:26.379 "secure_channel": true 00:23:26.379 } 00:23:26.379 } 00:23:26.379 ] 00:23:26.379 } 00:23:26.379 ] 00:23:26.379 }' 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2557700 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2557700 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2557700 ']' 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.379 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.379 [2024-07-26 14:17:43.229744] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:26.379 [2024-07-26 14:17:43.229851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.637 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.637 [2024-07-26 14:17:43.318562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.637 [2024-07-26 14:17:43.458245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:26.637 [2024-07-26 14:17:43.458319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.637 [2024-07-26 14:17:43.458341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.637 [2024-07-26 14:17:43.458357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.637 [2024-07-26 14:17:43.458371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.637 [2024-07-26 14:17:43.458497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.895 [2024-07-26 14:17:43.716683] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.895 [2024-07-26 14:17:43.747403] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:26.895 [2024-07-26 14:17:43.763502] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.895 [2024-07-26 14:17:43.763807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.462 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.462 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.462 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.462 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.462 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2557871 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2557871 /var/tmp/bdevperf.sock 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2557871 ']' 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
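Both halves of this test take their configuration the same way: the JSON blocks echoed above and below are never written to disk but are piped straight into the process, which is why the command lines reference /dev/fd/62 for nvmf_tgt and /dev/fd/63 for bdevperf. A minimal sketch of the pattern, assuming bash process substitution is what backs those descriptor paths in this run (workspace paths shortened, JSON elided):

    # The app reads its startup config from the fd created by <( ... ),
    # which appears inside the process as /dev/fd/NN; no temp file needed.
    tgt_json='{ "subsystems": [ ... ] }'   # elided; the full dump is echoed in the log
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgt_json")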
00:23:27.721 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:27.721 "subsystems": [ 00:23:27.721 { 00:23:27.721 "subsystem": "keyring", 00:23:27.721 "config": [] 00:23:27.721 }, 00:23:27.721 { 00:23:27.721 "subsystem": "iobuf", 00:23:27.721 "config": [ 00:23:27.721 { 00:23:27.721 "method": "iobuf_set_options", 00:23:27.721 "params": { 00:23:27.721 "small_pool_count": 8192, 00:23:27.721 "large_pool_count": 1024, 00:23:27.721 "small_bufsize": 8192, 00:23:27.721 "large_bufsize": 135168 00:23:27.721 } 00:23:27.721 } 00:23:27.721 ] 00:23:27.721 }, 00:23:27.721 { 00:23:27.721 "subsystem": "sock", 00:23:27.721 "config": [ 00:23:27.721 { 00:23:27.721 "method": "sock_set_default_impl", 00:23:27.721 "params": { 00:23:27.721 "impl_name": "posix" 00:23:27.721 } 00:23:27.721 }, 00:23:27.721 { 00:23:27.721 "method": "sock_impl_set_options", 00:23:27.721 "params": { 00:23:27.721 "impl_name": "ssl", 00:23:27.721 "recv_buf_size": 4096, 00:23:27.721 "send_buf_size": 4096, 00:23:27.721 "enable_recv_pipe": true, 00:23:27.721 "enable_quickack": false, 00:23:27.721 "enable_placement_id": 0, 00:23:27.721 "enable_zerocopy_send_server": true, 00:23:27.721 "enable_zerocopy_send_client": false, 00:23:27.721 "zerocopy_threshold": 0, 00:23:27.721 "tls_version": 0, 00:23:27.721 "enable_ktls": false 00:23:27.721 } 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "method": "sock_impl_set_options", 00:23:27.722 "params": { 00:23:27.722 "impl_name": "posix", 00:23:27.722 "recv_buf_size": 2097152, 00:23:27.722 "send_buf_size": 2097152, 00:23:27.722 "enable_recv_pipe": true, 00:23:27.722 "enable_quickack": false, 00:23:27.722 "enable_placement_id": 0, 00:23:27.722 "enable_zerocopy_send_server": true, 00:23:27.722 "enable_zerocopy_send_client": false, 00:23:27.722 "zerocopy_threshold": 0, 00:23:27.722 "tls_version": 0, 00:23:27.722 "enable_ktls": false 00:23:27.722 } 00:23:27.722 } 00:23:27.722 ] 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "subsystem": "vmd", 00:23:27.722 "config": [] 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "subsystem": "accel", 00:23:27.722 "config": [ 00:23:27.722 { 00:23:27.722 "method": "accel_set_options", 00:23:27.722 "params": { 00:23:27.722 "small_cache_size": 128, 00:23:27.722 "large_cache_size": 16, 00:23:27.722 "task_count": 2048, 00:23:27.722 "sequence_count": 2048, 00:23:27.722 "buf_count": 2048 00:23:27.722 } 00:23:27.722 } 00:23:27.722 ] 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "subsystem": "bdev", 00:23:27.722 "config": [ 00:23:27.722 { 00:23:27.722 "method": "bdev_set_options", 00:23:27.722 "params": { 00:23:27.722 "bdev_io_pool_size": 65535, 00:23:27.722 "bdev_io_cache_size": 256, 00:23:27.722 "bdev_auto_examine": true, 00:23:27.722 "iobuf_small_cache_size": 128, 00:23:27.722 "iobuf_large_cache_size": 16 00:23:27.722 } 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "method": "bdev_raid_set_options", 00:23:27.722 "params": { 00:23:27.722 "process_window_size_kb": 1024, 00:23:27.722 "process_max_bandwidth_mb_sec": 0 00:23:27.722 } 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "method": "bdev_iscsi_set_options", 00:23:27.722 "params": { 00:23:27.722 "timeout_sec": 30 00:23:27.722 } 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "method": "bdev_nvme_set_options", 00:23:27.722 "params": { 00:23:27.722 "action_on_timeout": "none", 00:23:27.722 "timeout_us": 0, 00:23:27.722 "timeout_admin_us": 0, 00:23:27.722 "keep_alive_timeout_ms": 10000, 00:23:27.722 "arbitration_burst": 0, 00:23:27.722 "low_priority_weight": 0, 00:23:27.722 "medium_priority_weight": 0, 
00:23:27.722 "high_priority_weight": 0, 00:23:27.722 "nvme_adminq_poll_period_us": 10000, 00:23:27.722 "nvme_ioq_poll_period_us": 0, 00:23:27.722 "io_queue_requests": 512, 00:23:27.722 "delay_cmd_submit": true, 00:23:27.722 "transport_retry_count": 4, 00:23:27.722 "bdev_retry_count": 3, 00:23:27.722 "transport_ack_timeout": 0, 00:23:27.722 "ctrlr_loss_timeout_sec": 0, 00:23:27.722 "reconnect_delay_sec": 0, 00:23:27.722 "fast_io_fail_timeout_sec": 0, 00:23:27.722 "disable_auto_failback": false, 00:23:27.722 "generate_uuids": false, 00:23:27.722 "transport_tos": 0, 00:23:27.722 "nvme_error_stat": false, 00:23:27.722 "rdma_srq_size": 0, 00:23:27.722 "io_path_stat": false, 00:23:27.722 "allow_accel_sequence": false, 00:23:27.722 "rdma_max_cq_size": 0, 00:23:27.722 "rdma_cm_event_timeout_ms": 0, 00:23:27.722 "dhchap_digests": [ 00:23:27.722 "sha256", 00:23:27.722 "sha384", 00:23:27.722 "sha512" 00:23:27.722 ], 00:23:27.722 "dhchap_dhgroups": [ 00:23:27.722 "null", 00:23:27.722 "ffdhe2048", 00:23:27.722 "ffdhe3072", 00:23:27.722 "ffdhe4096", 00:23:27.722 "ffdhe6144", 00:23:27.722 "ffdhe8192" 00:23:27.722 ] 00:23:27.722 } 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "method": "bdev_nvme_attach_controller", 00:23:27.722 "params": { 00:23:27.722 "name": "TLSTEST", 00:23:27.722 "trtype": "TCP", 00:23:27.722 "adrfam": "IPv4", 00:23:27.722 "traddr": "10.0.0.2", 00:23:27.722 "trsvcid": "4420", 00:23:27.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.722 "prchk_reftag": false, 00:23:27.722 "prchk_guard": false, 00:23:27.722 "ctrlr_loss_timeout_sec": 0, 00:23:27.722 "reconnect_delay_sec": 0, 00:23:27.722 "fast_io_fail_timeout_sec": 0, 00:23:27.722 "psk": "/tmp/tmp.wcSOV3rOi5", 00:23:27.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.722 "hdgst": false, 00:23:27.722 "ddgst": false 00:23:27.722 } 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "method": "bdev_nvme_set_hotplug", 00:23:27.722 "params": { 00:23:27.722 "period_us": 100000, 00:23:27.722 "enable": false 00:23:27.722 } 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "method": "bdev_wait_for_examine" 00:23:27.722 } 00:23:27.722 ] 00:23:27.722 }, 00:23:27.722 { 00:23:27.722 "subsystem": "nbd", 00:23:27.722 "config": [] 00:23:27.722 } 00:23:27.722 ] 00:23:27.722 }' 00:23:27.722 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.722 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.722 [2024-07-26 14:17:44.421545] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
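The bdevperf process configured above was started with -z, so it comes up idle and waits on its RPC socket; the bdevperf.py call traced just below is what actually starts the I/O. A sketch of the two-step launch as used in this run (full Jenkins workspace paths shortened):

    # Start bdevperf in wait mode (-z): no I/O until triggered over RPC.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_json") &
    # Kick off the configured job; -t 20 is the helper's own timeout, while
    # the 10-second test duration comes from bdevperf's -t 10 above.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests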
00:23:27.722 [2024-07-26 14:17:44.421650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557871 ] 00:23:27.722 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.722 [2024-07-26 14:17:44.501450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.981 [2024-07-26 14:17:44.640423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.981 [2024-07-26 14:17:44.833995] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.981 [2024-07-26 14:17:44.834206] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.916 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.916 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:28.916 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:23:28.916 Running I/O for 10 seconds...
00:23:38.900
00:23:38.900 Latency(us)
00:23:38.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:38.900 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:38.900 Verification LBA range: start 0x0 length 0x2000
00:23:38.900 TLSTESTn1 : 10.04 2594.88 10.14 0.00 0.00 49188.32 12427.57 74177.04
00:23:38.900 ===================================================================================================================
00:23:38.900 Total : 2594.88 10.14 0.00 0.00 49188.32 12427.57 74177.04
00:23:38.900 0
00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2557871 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2557871 ']' 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2557871 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2557871 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2557871' killing process with pid 2557871 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2557871
Received shutdown signal, test time was about 10.000000 seconds
00:23:38.900
00:23:38.900 Latency(us)
00:23:38.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:38.900 ===================================================================================================================
00:23:38.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:38.900 [2024-07-26 14:17:55.731050] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:38.900 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2557871 00:23:39.159 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2557700 00:23:39.159 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2557700 ']' 00:23:39.159 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2557700 00:23:39.159 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:39.418 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.418 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2557700 00:23:39.418 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:39.418 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:39.418 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2557700' killing process with pid 2557700 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2557700 00:23:39.418 [2024-07-26 14:17:56.095125] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:39.418 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2557700 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2559254 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2559254 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2559254 ']' 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
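Teardown in this log always goes through the killprocess helper from autotest_common.sh, which checks before it kills: the pid must still be alive, and its command name must be an SPDK reactor rather than sudo. A condensed sketch of the flow traced above:

    kill -0 "$pid"                                   # fail fast if already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # expect reactor_N, not sudo
    # (what the helper does when comm is sudo is not visible in this trace)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap before the next stage starts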
00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.676 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.676 [2024-07-26 14:17:56.543180] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:39.676 [2024-07-26 14:17:56.543354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.934 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.934 [2024-07-26 14:17:56.667257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.934 [2024-07-26 14:17:56.789579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.934 [2024-07-26 14:17:56.789654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.934 [2024-07-26 14:17:56.789671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.934 [2024-07-26 14:17:56.789684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.934 [2024-07-26 14:17:56.789696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.934 [2024-07-26 14:17:56.789732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wcSOV3rOi5 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wcSOV3rOi5 00:23:40.191 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:40.447 [2024-07-26 14:17:57.267775] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.447 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:40.704 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.270 [2024-07-26 14:17:58.073901] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.270 [2024-07-26 14:17:58.074153] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.270 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.835 malloc0 00:23:41.835 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:42.400 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5 00:23:42.965 [2024-07-26 14:17:59.687107] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2559675 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2559675 /var/tmp/bdevperf.sock 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2559675 ']' 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.965 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.965 [2024-07-26 14:17:59.797762] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
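Unlike the first target, which took its whole configuration over /dev/fd/62, this one was started bare and configured live; the setup_nvmf_tgt sequence traced above reduces to six rpc.py calls. The -k on the listener requests a TLS-secured channel (it appears as "secure_channel": true in the earlier config dump), and the host entry binds the PSK file; as the tcp.c:3725 warning notes, passing the PSK as a path is the deprecated form:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wcSOV3rOi5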
00:23:42.965 [2024-07-26 14:17:59.797914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559675 ] 00:23:43.222 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.222 [2024-07-26 14:17:59.902015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.222 [2024-07-26 14:18:00.028810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.480 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.480 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:43.480 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wcSOV3rOi5 00:23:43.737 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:44.303 [2024-07-26 14:18:00.921094] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.303 nvme0n1 00:23:44.303 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:44.303 Running I/O for 1 seconds...
00:23:45.675
00:23:45.675 Latency(us)
00:23:45.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:45.675 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:45.675 Verification LBA range: start 0x0 length 0x2000
00:23:45.675 nvme0n1 : 1.05 2680.84 10.47 0.00 0.00 46728.75 6747.78 68351.62
00:23:45.675 ===================================================================================================================
00:23:45.675 Total : 2680.84 10.47 0.00 0.00 46728.75 6747.78 68351.62
00:23:45.675 0
00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2559675 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2559675 ']' 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2559675 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2559675 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:45.675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2559675' killing process with pid 2559675 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2559675
00:23:45.675 Received shutdown signal, test time was about 1.000000 seconds
00:23:45.675
00:23:45.675 Latency(us)
00:23:45.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:45.675 ===================================================================================================================
00:23:45.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:45.676 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2559675 00:23:45.676 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2559254 00:23:45.676 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2559254 ']' 00:23:45.676 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2559254 00:23:45.676 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:45.934 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:45.934 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2559254 00:23:45.934 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:45.934 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:45.934 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2559254' killing process with pid 2559254 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2559254 00:23:45.934 [2024-07-26 14:18:02.575351] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:45.934 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2559254 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2560116 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2560116 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2560116 ']' 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
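The bdevperf run just torn down exercised the replacement for the deprecated in-opts PSK: the key file is registered once on the initiator side under a name via the keyring, and the controller attach then references it by that name. The two rpc.py calls from the trace above:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wcSOV3rOi5
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Note that --psk now names a registered key (key0) rather than a file path, and accordingly no nvme_ctrlr_psk deprecation warning is logged for this attach; only the target-side 'PSK path' deprecation still fires, since the subsystem host entry was added with a path.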
00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.192 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.192 [2024-07-26 14:18:02.919461] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:46.192 [2024-07-26 14:18:02.919559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.192 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.192 [2024-07-26 14:18:02.997803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.450 [2024-07-26 14:18:03.120248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.451 [2024-07-26 14:18:03.120301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.451 [2024-07-26 14:18:03.120318] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.451 [2024-07-26 14:18:03.120332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.451 [2024-07-26 14:18:03.120344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.451 [2024-07-26 14:18:03.120373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.451 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.451 [2024-07-26 14:18:03.283307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.451 malloc0 00:23:46.451 [2024-07-26 14:18:03.316304] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.451 [2024-07-26 14:18:03.325667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2560214 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2560214 /var/tmp/bdevperf.sock 00:23:46.710 14:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2560214 ']' 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.710 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.710 [2024-07-26 14:18:03.396738] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:46.710 [2024-07-26 14:18:03.396815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560214 ] 00:23:46.710 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.710 [2024-07-26 14:18:03.464152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.710 [2024-07-26 14:18:03.588808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.275 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.275 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:47.275 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wcSOV3rOi5 00:23:47.533 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:47.791 [2024-07-26 14:18:04.608191] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.050 nvme0n1 00:23:48.050 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:48.050 Running I/O for 1 seconds... 
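The one-second verify below ends the same way as the previous run, but this pass adds a round-trip check: after the I/O completes, both processes are asked to serialize their live state with save_config, and the two JSON dumps that follow (tgtcfg, then bperfcfg) show the keyring entry and the "psk": "key0" reference surviving into saved configuration. A sketch of the two snapshot calls; the file redirection is added here only for illustration, as the test captures the output into shell variables instead:

    scripts/rpc.py save_config > tgt.json                              # target, default /var/tmp/spdk.sock
    scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json  # bdevperf initiator side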
00:23:49.423
00:23:49.423 Latency(us)
00:23:49.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.423 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:49.423 Verification LBA range: start 0x0 length 0x2000
00:23:49.423 nvme0n1 : 1.05 2397.78 9.37 0.00 0.00 52237.75 6650.69 79614.10
00:23:49.423 ===================================================================================================================
00:23:49.423 Total : 2397.78 9.37 0.00 0.00 52237.75 6650.69 79614.10
00:23:49.423 0
00:23:49.423 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:49.423 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.423 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.423 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.423 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:49.423 "subsystems": [ 00:23:49.423 { 00:23:49.423 "subsystem": "keyring", 00:23:49.423 "config": [ 00:23:49.423 { 00:23:49.423 "method": "keyring_file_add_key", 00:23:49.423 "params": { 00:23:49.423 "name": "key0", 00:23:49.423 "path": "/tmp/tmp.wcSOV3rOi5" 00:23:49.423 } 00:23:49.423 } 00:23:49.423 ] 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "subsystem": "iobuf", 00:23:49.423 "config": [ 00:23:49.423 { 00:23:49.423 "method": "iobuf_set_options", 00:23:49.423 "params": { 00:23:49.423 "small_pool_count": 8192, 00:23:49.423 "large_pool_count": 1024, 00:23:49.423 "small_bufsize": 8192, 00:23:49.423 "large_bufsize": 135168 00:23:49.423 } 00:23:49.423 } 00:23:49.423 ] 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "subsystem": "sock", 00:23:49.423 "config": [ 00:23:49.423 { 00:23:49.423 "method": "sock_set_default_impl", 00:23:49.423 "params": { 00:23:49.423 "impl_name": "posix" 00:23:49.423 } 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "method": "sock_impl_set_options", 00:23:49.423 "params": { 00:23:49.423 "impl_name": "ssl", 00:23:49.423 "recv_buf_size": 4096, 00:23:49.423 "send_buf_size": 4096, 00:23:49.423 "enable_recv_pipe": true, 00:23:49.423 "enable_quickack": false, 00:23:49.423 "enable_placement_id": 0, 00:23:49.423 "enable_zerocopy_send_server": true, 00:23:49.423 "enable_zerocopy_send_client": false, 00:23:49.423 "zerocopy_threshold": 0, 00:23:49.423 "tls_version": 0, 00:23:49.423 "enable_ktls": false 00:23:49.423 } 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "method": "sock_impl_set_options", 00:23:49.423 "params": { 00:23:49.423 "impl_name": "posix", 00:23:49.423 "recv_buf_size": 2097152, 00:23:49.423 "send_buf_size": 2097152, 00:23:49.423 "enable_recv_pipe": true, 00:23:49.423 "enable_quickack": false, 00:23:49.423 "enable_placement_id": 0, 00:23:49.423 "enable_zerocopy_send_server": true, 00:23:49.423 "enable_zerocopy_send_client": false, 00:23:49.423 "zerocopy_threshold": 0, 00:23:49.423 "tls_version": 0, 00:23:49.423 "enable_ktls": false 00:23:49.423 } 00:23:49.423 } 00:23:49.423 ] 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "subsystem": "vmd", 00:23:49.423 "config": [] 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "subsystem": "accel", 00:23:49.423 "config": [ 00:23:49.423 { 00:23:49.423 "method": "accel_set_options", 00:23:49.423 "params": { 00:23:49.423 "small_cache_size": 128, 00:23:49.423 "large_cache_size": 16, 00:23:49.423 "task_count": 2048, 00:23:49.423 "sequence_count": 2048, 00:23:49.423 "buf_count":
2048 00:23:49.423 } 00:23:49.423 } 00:23:49.423 ] 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "subsystem": "bdev", 00:23:49.423 "config": [ 00:23:49.423 { 00:23:49.423 "method": "bdev_set_options", 00:23:49.423 "params": { 00:23:49.423 "bdev_io_pool_size": 65535, 00:23:49.423 "bdev_io_cache_size": 256, 00:23:49.423 "bdev_auto_examine": true, 00:23:49.423 "iobuf_small_cache_size": 128, 00:23:49.423 "iobuf_large_cache_size": 16 00:23:49.423 } 00:23:49.423 }, 00:23:49.423 { 00:23:49.423 "method": "bdev_raid_set_options", 00:23:49.423 "params": { 00:23:49.424 "process_window_size_kb": 1024, 00:23:49.424 "process_max_bandwidth_mb_sec": 0 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "bdev_iscsi_set_options", 00:23:49.424 "params": { 00:23:49.424 "timeout_sec": 30 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "bdev_nvme_set_options", 00:23:49.424 "params": { 00:23:49.424 "action_on_timeout": "none", 00:23:49.424 "timeout_us": 0, 00:23:49.424 "timeout_admin_us": 0, 00:23:49.424 "keep_alive_timeout_ms": 10000, 00:23:49.424 "arbitration_burst": 0, 00:23:49.424 "low_priority_weight": 0, 00:23:49.424 "medium_priority_weight": 0, 00:23:49.424 "high_priority_weight": 0, 00:23:49.424 "nvme_adminq_poll_period_us": 10000, 00:23:49.424 "nvme_ioq_poll_period_us": 0, 00:23:49.424 "io_queue_requests": 0, 00:23:49.424 "delay_cmd_submit": true, 00:23:49.424 "transport_retry_count": 4, 00:23:49.424 "bdev_retry_count": 3, 00:23:49.424 "transport_ack_timeout": 0, 00:23:49.424 "ctrlr_loss_timeout_sec": 0, 00:23:49.424 "reconnect_delay_sec": 0, 00:23:49.424 "fast_io_fail_timeout_sec": 0, 00:23:49.424 "disable_auto_failback": false, 00:23:49.424 "generate_uuids": false, 00:23:49.424 "transport_tos": 0, 00:23:49.424 "nvme_error_stat": false, 00:23:49.424 "rdma_srq_size": 0, 00:23:49.424 "io_path_stat": false, 00:23:49.424 "allow_accel_sequence": false, 00:23:49.424 "rdma_max_cq_size": 0, 00:23:49.424 "rdma_cm_event_timeout_ms": 0, 00:23:49.424 "dhchap_digests": [ 00:23:49.424 "sha256", 00:23:49.424 "sha384", 00:23:49.424 "sha512" 00:23:49.424 ], 00:23:49.424 "dhchap_dhgroups": [ 00:23:49.424 "null", 00:23:49.424 "ffdhe2048", 00:23:49.424 "ffdhe3072", 00:23:49.424 "ffdhe4096", 00:23:49.424 "ffdhe6144", 00:23:49.424 "ffdhe8192" 00:23:49.424 ] 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "bdev_nvme_set_hotplug", 00:23:49.424 "params": { 00:23:49.424 "period_us": 100000, 00:23:49.424 "enable": false 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "bdev_malloc_create", 00:23:49.424 "params": { 00:23:49.424 "name": "malloc0", 00:23:49.424 "num_blocks": 8192, 00:23:49.424 "block_size": 4096, 00:23:49.424 "physical_block_size": 4096, 00:23:49.424 "uuid": "3a31db75-60d0-45ee-b419-c4455d03525e", 00:23:49.424 "optimal_io_boundary": 0, 00:23:49.424 "md_size": 0, 00:23:49.424 "dif_type": 0, 00:23:49.424 "dif_is_head_of_md": false, 00:23:49.424 "dif_pi_format": 0 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "bdev_wait_for_examine" 00:23:49.424 } 00:23:49.424 ] 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "subsystem": "nbd", 00:23:49.424 "config": [] 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "subsystem": "scheduler", 00:23:49.424 "config": [ 00:23:49.424 { 00:23:49.424 "method": "framework_set_scheduler", 00:23:49.424 "params": { 00:23:49.424 "name": "static" 00:23:49.424 } 00:23:49.424 } 00:23:49.424 ] 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "subsystem": "nvmf", 00:23:49.424 "config": [ 00:23:49.424 { 00:23:49.424 
"method": "nvmf_set_config", 00:23:49.424 "params": { 00:23:49.424 "discovery_filter": "match_any", 00:23:49.424 "admin_cmd_passthru": { 00:23:49.424 "identify_ctrlr": false 00:23:49.424 } 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "nvmf_set_max_subsystems", 00:23:49.424 "params": { 00:23:49.424 "max_subsystems": 1024 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "nvmf_set_crdt", 00:23:49.424 "params": { 00:23:49.424 "crdt1": 0, 00:23:49.424 "crdt2": 0, 00:23:49.424 "crdt3": 0 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "nvmf_create_transport", 00:23:49.424 "params": { 00:23:49.424 "trtype": "TCP", 00:23:49.424 "max_queue_depth": 128, 00:23:49.424 "max_io_qpairs_per_ctrlr": 127, 00:23:49.424 "in_capsule_data_size": 4096, 00:23:49.424 "max_io_size": 131072, 00:23:49.424 "io_unit_size": 131072, 00:23:49.424 "max_aq_depth": 128, 00:23:49.424 "num_shared_buffers": 511, 00:23:49.424 "buf_cache_size": 4294967295, 00:23:49.424 "dif_insert_or_strip": false, 00:23:49.424 "zcopy": false, 00:23:49.424 "c2h_success": false, 00:23:49.424 "sock_priority": 0, 00:23:49.424 "abort_timeout_sec": 1, 00:23:49.424 "ack_timeout": 0, 00:23:49.424 "data_wr_pool_size": 0 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "nvmf_create_subsystem", 00:23:49.424 "params": { 00:23:49.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.424 "allow_any_host": false, 00:23:49.424 "serial_number": "00000000000000000000", 00:23:49.424 "model_number": "SPDK bdev Controller", 00:23:49.424 "max_namespaces": 32, 00:23:49.424 "min_cntlid": 1, 00:23:49.424 "max_cntlid": 65519, 00:23:49.424 "ana_reporting": false 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "nvmf_subsystem_add_host", 00:23:49.424 "params": { 00:23:49.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.424 "host": "nqn.2016-06.io.spdk:host1", 00:23:49.424 "psk": "key0" 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "nvmf_subsystem_add_ns", 00:23:49.424 "params": { 00:23:49.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.424 "namespace": { 00:23:49.424 "nsid": 1, 00:23:49.424 "bdev_name": "malloc0", 00:23:49.424 "nguid": "3A31DB7560D045EEB419C4455D03525E", 00:23:49.424 "uuid": "3a31db75-60d0-45ee-b419-c4455d03525e", 00:23:49.424 "no_auto_visible": false 00:23:49.424 } 00:23:49.424 } 00:23:49.424 }, 00:23:49.424 { 00:23:49.424 "method": "nvmf_subsystem_add_listener", 00:23:49.424 "params": { 00:23:49.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.424 "listen_address": { 00:23:49.424 "trtype": "TCP", 00:23:49.424 "adrfam": "IPv4", 00:23:49.424 "traddr": "10.0.0.2", 00:23:49.424 "trsvcid": "4420" 00:23:49.424 }, 00:23:49.424 "secure_channel": false, 00:23:49.424 "sock_impl": "ssl" 00:23:49.424 } 00:23:49.424 } 00:23:49.424 ] 00:23:49.424 } 00:23:49.424 ] 00:23:49.424 }' 00:23:49.424 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:49.683 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:49.683 "subsystems": [ 00:23:49.683 { 00:23:49.683 "subsystem": "keyring", 00:23:49.683 "config": [ 00:23:49.683 { 00:23:49.683 "method": "keyring_file_add_key", 00:23:49.683 "params": { 00:23:49.683 "name": "key0", 00:23:49.683 "path": "/tmp/tmp.wcSOV3rOi5" 00:23:49.683 } 00:23:49.683 } 00:23:49.683 ] 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "subsystem": "iobuf", 00:23:49.683 
"config": [ 00:23:49.683 { 00:23:49.683 "method": "iobuf_set_options", 00:23:49.683 "params": { 00:23:49.683 "small_pool_count": 8192, 00:23:49.683 "large_pool_count": 1024, 00:23:49.683 "small_bufsize": 8192, 00:23:49.683 "large_bufsize": 135168 00:23:49.683 } 00:23:49.683 } 00:23:49.683 ] 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "subsystem": "sock", 00:23:49.683 "config": [ 00:23:49.683 { 00:23:49.683 "method": "sock_set_default_impl", 00:23:49.683 "params": { 00:23:49.683 "impl_name": "posix" 00:23:49.683 } 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "method": "sock_impl_set_options", 00:23:49.683 "params": { 00:23:49.683 "impl_name": "ssl", 00:23:49.683 "recv_buf_size": 4096, 00:23:49.683 "send_buf_size": 4096, 00:23:49.683 "enable_recv_pipe": true, 00:23:49.683 "enable_quickack": false, 00:23:49.683 "enable_placement_id": 0, 00:23:49.683 "enable_zerocopy_send_server": true, 00:23:49.683 "enable_zerocopy_send_client": false, 00:23:49.683 "zerocopy_threshold": 0, 00:23:49.683 "tls_version": 0, 00:23:49.683 "enable_ktls": false 00:23:49.683 } 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "method": "sock_impl_set_options", 00:23:49.683 "params": { 00:23:49.683 "impl_name": "posix", 00:23:49.683 "recv_buf_size": 2097152, 00:23:49.683 "send_buf_size": 2097152, 00:23:49.683 "enable_recv_pipe": true, 00:23:49.683 "enable_quickack": false, 00:23:49.683 "enable_placement_id": 0, 00:23:49.683 "enable_zerocopy_send_server": true, 00:23:49.683 "enable_zerocopy_send_client": false, 00:23:49.683 "zerocopy_threshold": 0, 00:23:49.683 "tls_version": 0, 00:23:49.683 "enable_ktls": false 00:23:49.683 } 00:23:49.683 } 00:23:49.683 ] 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "subsystem": "vmd", 00:23:49.683 "config": [] 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "subsystem": "accel", 00:23:49.683 "config": [ 00:23:49.683 { 00:23:49.683 "method": "accel_set_options", 00:23:49.683 "params": { 00:23:49.683 "small_cache_size": 128, 00:23:49.683 "large_cache_size": 16, 00:23:49.683 "task_count": 2048, 00:23:49.683 "sequence_count": 2048, 00:23:49.683 "buf_count": 2048 00:23:49.683 } 00:23:49.683 } 00:23:49.683 ] 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "subsystem": "bdev", 00:23:49.683 "config": [ 00:23:49.683 { 00:23:49.683 "method": "bdev_set_options", 00:23:49.683 "params": { 00:23:49.683 "bdev_io_pool_size": 65535, 00:23:49.683 "bdev_io_cache_size": 256, 00:23:49.683 "bdev_auto_examine": true, 00:23:49.683 "iobuf_small_cache_size": 128, 00:23:49.683 "iobuf_large_cache_size": 16 00:23:49.683 } 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "method": "bdev_raid_set_options", 00:23:49.683 "params": { 00:23:49.683 "process_window_size_kb": 1024, 00:23:49.683 "process_max_bandwidth_mb_sec": 0 00:23:49.683 } 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "method": "bdev_iscsi_set_options", 00:23:49.683 "params": { 00:23:49.683 "timeout_sec": 30 00:23:49.683 } 00:23:49.683 }, 00:23:49.683 { 00:23:49.683 "method": "bdev_nvme_set_options", 00:23:49.683 "params": { 00:23:49.683 "action_on_timeout": "none", 00:23:49.683 "timeout_us": 0, 00:23:49.683 "timeout_admin_us": 0, 00:23:49.683 "keep_alive_timeout_ms": 10000, 00:23:49.683 "arbitration_burst": 0, 00:23:49.683 "low_priority_weight": 0, 00:23:49.683 "medium_priority_weight": 0, 00:23:49.683 "high_priority_weight": 0, 00:23:49.683 "nvme_adminq_poll_period_us": 10000, 00:23:49.683 "nvme_ioq_poll_period_us": 0, 00:23:49.683 "io_queue_requests": 512, 00:23:49.683 "delay_cmd_submit": true, 00:23:49.683 "transport_retry_count": 4, 00:23:49.684 "bdev_retry_count": 3, 
00:23:49.684 "transport_ack_timeout": 0, 00:23:49.684 "ctrlr_loss_timeout_sec": 0, 00:23:49.684 "reconnect_delay_sec": 0, 00:23:49.684 "fast_io_fail_timeout_sec": 0, 00:23:49.684 "disable_auto_failback": false, 00:23:49.684 "generate_uuids": false, 00:23:49.684 "transport_tos": 0, 00:23:49.684 "nvme_error_stat": false, 00:23:49.684 "rdma_srq_size": 0, 00:23:49.684 "io_path_stat": false, 00:23:49.684 "allow_accel_sequence": false, 00:23:49.684 "rdma_max_cq_size": 0, 00:23:49.684 "rdma_cm_event_timeout_ms": 0, 00:23:49.684 "dhchap_digests": [ 00:23:49.684 "sha256", 00:23:49.684 "sha384", 00:23:49.684 "sha512" 00:23:49.684 ], 00:23:49.684 "dhchap_dhgroups": [ 00:23:49.684 "null", 00:23:49.684 "ffdhe2048", 00:23:49.684 "ffdhe3072", 00:23:49.684 "ffdhe4096", 00:23:49.684 "ffdhe6144", 00:23:49.684 "ffdhe8192" 00:23:49.684 ] 00:23:49.684 } 00:23:49.684 }, 00:23:49.684 { 00:23:49.684 "method": "bdev_nvme_attach_controller", 00:23:49.684 "params": { 00:23:49.684 "name": "nvme0", 00:23:49.684 "trtype": "TCP", 00:23:49.684 "adrfam": "IPv4", 00:23:49.684 "traddr": "10.0.0.2", 00:23:49.684 "trsvcid": "4420", 00:23:49.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.684 "prchk_reftag": false, 00:23:49.684 "prchk_guard": false, 00:23:49.684 "ctrlr_loss_timeout_sec": 0, 00:23:49.684 "reconnect_delay_sec": 0, 00:23:49.684 "fast_io_fail_timeout_sec": 0, 00:23:49.684 "psk": "key0", 00:23:49.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.684 "hdgst": false, 00:23:49.684 "ddgst": false 00:23:49.684 } 00:23:49.684 }, 00:23:49.684 { 00:23:49.684 "method": "bdev_nvme_set_hotplug", 00:23:49.684 "params": { 00:23:49.684 "period_us": 100000, 00:23:49.684 "enable": false 00:23:49.684 } 00:23:49.684 }, 00:23:49.684 { 00:23:49.684 "method": "bdev_enable_histogram", 00:23:49.684 "params": { 00:23:49.684 "name": "nvme0n1", 00:23:49.684 "enable": true 00:23:49.684 } 00:23:49.684 }, 00:23:49.684 { 00:23:49.684 "method": "bdev_wait_for_examine" 00:23:49.684 } 00:23:49.684 ] 00:23:49.684 }, 00:23:49.684 { 00:23:49.684 "subsystem": "nbd", 00:23:49.684 "config": [] 00:23:49.684 } 00:23:49.684 ] 00:23:49.684 }' 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2560214 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2560214 ']' 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2560214 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2560214 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2560214' killing process with pid 2560214 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2560214
Received shutdown signal, test time was about 1.000000 seconds
00:23:49.684
00:23:49.684 Latency(us)
00:23:49.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.684 ===================================================================================================================
00:23:49.684 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:49.684 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2560214 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2560116 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2560116 ']' 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2560116 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2560116 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2560116' killing process with pid 2560116 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2560116 00:23:49.942 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2560116 00:23:50.201 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:50.201 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.201 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:50.201 "subsystems": [ 00:23:50.201 { 00:23:50.201 "subsystem": "keyring", 00:23:50.201 "config": [ 00:23:50.201 { 00:23:50.201 "method": "keyring_file_add_key", 00:23:50.201 "params": { 00:23:50.201 "name": "key0", 00:23:50.201 "path": "/tmp/tmp.wcSOV3rOi5" 00:23:50.201 } 00:23:50.201 } 00:23:50.201 ] 00:23:50.201 }, 00:23:50.201 { 00:23:50.201 "subsystem": "iobuf", 00:23:50.201 "config": [ 00:23:50.201 { 00:23:50.201 "method": "iobuf_set_options", 00:23:50.201 "params": { 00:23:50.201 "small_pool_count": 8192, 00:23:50.201 "large_pool_count": 1024, 00:23:50.201 "small_bufsize": 8192, 00:23:50.201 "large_bufsize": 135168 00:23:50.201 } 00:23:50.201 } 00:23:50.201 ] 00:23:50.201 }, 00:23:50.201 { 00:23:50.201 "subsystem": "sock", 00:23:50.201 "config": [ 00:23:50.201 { 00:23:50.201 "method": "sock_set_default_impl", 00:23:50.201 "params": { 00:23:50.201 "impl_name": "posix" 00:23:50.201 } 00:23:50.201 }, 00:23:50.201 { 00:23:50.201 "method": "sock_impl_set_options", 00:23:50.201 "params": { 00:23:50.201 "impl_name": "ssl", 00:23:50.201 "recv_buf_size": 4096, 00:23:50.201 "send_buf_size": 4096, 00:23:50.201 "enable_recv_pipe": true, 00:23:50.201 "enable_quickack": false, 00:23:50.201 "enable_placement_id": 0, 00:23:50.201 "enable_zerocopy_send_server": true, 00:23:50.201 "enable_zerocopy_send_client": false, 00:23:50.201 "zerocopy_threshold": 0, 00:23:50.201 "tls_version": 0, 00:23:50.201 "enable_ktls": false 00:23:50.201 } 00:23:50.201 }, 00:23:50.201 { 00:23:50.201 "method": "sock_impl_set_options", 00:23:50.201 "params": { 00:23:50.201 "impl_name": "posix", 00:23:50.201 "recv_buf_size": 2097152,
00:23:50.201 "send_buf_size": 2097152, 00:23:50.201 "enable_recv_pipe": true, 00:23:50.201 "enable_quickack": false, 00:23:50.201 "enable_placement_id": 0, 00:23:50.201 "enable_zerocopy_send_server": true, 00:23:50.201 "enable_zerocopy_send_client": false, 00:23:50.201 "zerocopy_threshold": 0, 00:23:50.202 "tls_version": 0, 00:23:50.202 "enable_ktls": false 00:23:50.202 } 00:23:50.202 } 00:23:50.202 ] 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "subsystem": "vmd", 00:23:50.202 "config": [] 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "subsystem": "accel", 00:23:50.202 "config": [ 00:23:50.202 { 00:23:50.202 "method": "accel_set_options", 00:23:50.202 "params": { 00:23:50.202 "small_cache_size": 128, 00:23:50.202 "large_cache_size": 16, 00:23:50.202 "task_count": 2048, 00:23:50.202 "sequence_count": 2048, 00:23:50.202 "buf_count": 2048 00:23:50.202 } 00:23:50.202 } 00:23:50.202 ] 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "subsystem": "bdev", 00:23:50.202 "config": [ 00:23:50.202 { 00:23:50.202 "method": "bdev_set_options", 00:23:50.202 "params": { 00:23:50.202 "bdev_io_pool_size": 65535, 00:23:50.202 "bdev_io_cache_size": 256, 00:23:50.202 "bdev_auto_examine": true, 00:23:50.202 "iobuf_small_cache_size": 128, 00:23:50.202 "iobuf_large_cache_size": 16 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "bdev_raid_set_options", 00:23:50.202 "params": { 00:23:50.202 "process_window_size_kb": 1024, 00:23:50.202 "process_max_bandwidth_mb_sec": 0 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "bdev_iscsi_set_options", 00:23:50.202 "params": { 00:23:50.202 "timeout_sec": 30 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "bdev_nvme_set_options", 00:23:50.202 "params": { 00:23:50.202 "action_on_timeout": "none", 00:23:50.202 "timeout_us": 0, 00:23:50.202 "timeout_admin_us": 0, 00:23:50.202 "keep_alive_timeout_ms": 10000, 00:23:50.202 "arbitration_burst": 0, 00:23:50.202 "low_priority_weight": 0, 00:23:50.202 "medium_priority_weight": 0, 00:23:50.202 "high_priority_weight": 0, 00:23:50.202 "nvme_adminq_poll_period_us": 10000, 00:23:50.202 "nvme_ioq_poll_period_us": 0, 00:23:50.202 "io_queue_requests": 0, 00:23:50.202 "delay_cmd_submit": true, 00:23:50.202 "transport_retry_count": 4, 00:23:50.202 "bdev_retry_count": 3, 00:23:50.202 "transport_ack_timeout": 0, 00:23:50.202 "ctrlr_loss_timeout_sec": 0, 00:23:50.202 "reconnect_delay_sec": 0, 00:23:50.202 "fast_io_fail_timeout_sec": 0, 00:23:50.202 "disable_auto_failback": false, 00:23:50.202 "generate_uuids": false, 00:23:50.202 "transport_tos": 0, 00:23:50.202 "nvme_error_stat": false, 00:23:50.202 "rdma_srq_size": 0, 00:23:50.202 "io_path_stat": false, 00:23:50.202 "allow_accel_sequence": false, 00:23:50.202 "rdma_max_cq_size": 0, 00:23:50.202 "rdma_cm_event_timeout_ms": 0, 00:23:50.202 "dhchap_digests": [ 00:23:50.202 "sha256", 00:23:50.202 "sha384", 00:23:50.202 "sha512" 00:23:50.202 ], 00:23:50.202 "dhchap_dhgroups": [ 00:23:50.202 "null", 00:23:50.202 "ffdhe2048", 00:23:50.202 "ffdhe3072", 00:23:50.202 "ffdhe4096", 00:23:50.202 "ffdhe6144", 00:23:50.202 "ffdhe8192" 00:23:50.202 ] 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "bdev_nvme_set_hotplug", 00:23:50.202 "params": { 00:23:50.202 "period_us": 100000, 00:23:50.202 "enable": false 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "bdev_malloc_create", 00:23:50.202 "params": { 00:23:50.202 "name": "malloc0", 00:23:50.202 "num_blocks": 8192, 00:23:50.202 "block_size": 4096, 00:23:50.202 
"physical_block_size": 4096, 00:23:50.202 "uuid": "3a31db75-60d0-45ee-b419-c4455d03525e", 00:23:50.202 "optimal_io_boundary": 0, 00:23:50.202 "md_size": 0, 00:23:50.202 "dif_type": 0, 00:23:50.202 "dif_is_head_of_md": false, 00:23:50.202 "dif_pi_format": 0 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "bdev_wait_for_examine" 00:23:50.202 } 00:23:50.202 ] 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "subsystem": "nbd", 00:23:50.202 "config": [] 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "subsystem": "scheduler", 00:23:50.202 "config": [ 00:23:50.202 { 00:23:50.202 "method": "framework_set_scheduler", 00:23:50.202 "params": { 00:23:50.202 "name": "static" 00:23:50.202 } 00:23:50.202 } 00:23:50.202 ] 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "subsystem": "nvmf", 00:23:50.202 "config": [ 00:23:50.202 { 00:23:50.202 "method": "nvmf_set_config", 00:23:50.202 "params": { 00:23:50.202 "discovery_filter": "match_any", 00:23:50.202 "admin_cmd_passthru": { 00:23:50.202 "identify_ctrlr": false 00:23:50.202 } 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "nvmf_set_max_subsystems", 00:23:50.202 "params": { 00:23:50.202 "max_subsystems": 1024 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "nvmf_set_crdt", 00:23:50.202 "params": { 00:23:50.202 "crdt1": 0, 00:23:50.202 "crdt2": 0, 00:23:50.202 "crdt3": 0 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "nvmf_create_transport", 00:23:50.202 "params": { 00:23:50.202 "trtype": "TCP", 00:23:50.202 "max_queue_depth": 128, 00:23:50.202 "max_io_qpairs_per_ctrlr": 127, 00:23:50.202 "in_capsule_data_size": 4096, 00:23:50.202 "max_io_size": 131072, 00:23:50.202 "io_unit_size": 131072, 00:23:50.202 "max_aq_depth": 128, 00:23:50.202 "num_shared_buffers": 511, 00:23:50.202 "buf_cache_size": 4294967295, 00:23:50.202 "dif_insert_or_strip": false, 00:23:50.202 "zcopy": false, 00:23:50.202 "c2h_success": false, 00:23:50.202 "sock_priority": 0, 00:23:50.202 "abort_timeout_sec": 1, 00:23:50.202 "ack_timeout": 0, 00:23:50.202 "data_wr_pool_size": 0 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "nvmf_create_subsystem", 00:23:50.202 "params": { 00:23:50.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.202 "allow_any_host": false, 00:23:50.202 "serial_number": "00000000000000000000", 00:23:50.202 "model_number": "SPDK bdev Controller", 00:23:50.202 "max_namespaces": 32, 00:23:50.202 "min_cntlid": 1, 00:23:50.202 "max_cntlid": 65519, 00:23:50.202 "ana_reporting": false 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "nvmf_subsystem_add_host", 00:23:50.202 "params": { 00:23:50.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.202 "host": "nqn.2016-06.io.spdk:host1", 00:23:50.202 "psk": "key0" 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "nvmf_subsystem_add_ns", 00:23:50.202 "params": { 00:23:50.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.202 "namespace": { 00:23:50.202 "nsid": 1, 00:23:50.202 "bdev_name": "malloc0", 00:23:50.202 "nguid": "3A31DB7560D045EEB419C4455D03525E", 00:23:50.202 "uuid": "3a31db75-60d0-45ee-b419-c4455d03525e", 00:23:50.202 "no_auto_visible": false 00:23:50.202 } 00:23:50.202 } 00:23:50.202 }, 00:23:50.202 { 00:23:50.202 "method": "nvmf_subsystem_add_listener", 00:23:50.202 "params": { 00:23:50.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.202 "listen_address": { 00:23:50.202 "trtype": "TCP", 00:23:50.202 "adrfam": "IPv4", 00:23:50.202 "traddr": "10.0.0.2", 00:23:50.202 "trsvcid": "4420" 
00:23:50.202 }, 00:23:50.202 "secure_channel": false, 00:23:50.202 "sock_impl": "ssl" 00:23:50.202 } 00:23:50.202 } 00:23:50.202 ] 00:23:50.202 } 00:23:50.202 ] 00:23:50.202 }' 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2560630 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2560630 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2560630 ']' 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.202 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.202 [2024-07-26 14:18:07.000799] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:23:50.203 [2024-07-26 14:18:07.000900] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.203 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.203 [2024-07-26 14:18:07.076273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.461 [2024-07-26 14:18:07.200755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.461 [2024-07-26 14:18:07.200832] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.461 [2024-07-26 14:18:07.200849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.461 [2024-07-26 14:18:07.200862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.461 [2024-07-26 14:18:07.200873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
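The JSON blob echoed above is the complete nvmf_tgt configuration passed over /dev/fd/62: it loads the TLS pre-shared key from /tmp/tmp.wcSOV3rOi5 under the keyring name key0, backs the subsystem with a 32 MiB malloc bdev (8192 blocks x 4096 bytes), and exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 through a listener whose sock_impl is "ssl", restricted to nqn.2016-06.io.spdk:host1 with psk key0. As a minimal sketch, roughly the same target state could be assembled interactively with scripts/rpc.py against an already-running target; the option spellings below are assumptions (notably --psk, whose argument has shifted between a PSK file path and a keyring key name across SPDK releases), not the canonical invocation used by this test:

    # Sketch only: interactive equivalent of the -c /dev/fd/62 config above.
    # Flag spellings are assumptions and may differ between SPDK versions.
    rpc.py keyring_file_add_key key0 /tmp/tmp.wcSOV3rOi5
    rpc.py bdev_malloc_create -b malloc0 32 4096        # 8192 blocks * 4096 B = 32 MiB
    rpc.py nvmf_create_transport -t TCP
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420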
00:23:50.461 [2024-07-26 14:18:07.200966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.719 [2024-07-26 14:18:07.448924] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.719 [2024-07-26 14:18:07.502252] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.719 [2024-07-26 14:18:07.502551] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2560784 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2560784 /var/tmp/bdevperf.sock 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2560784 ']' 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.288 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:51.288 "subsystems": [ 00:23:51.288 { 00:23:51.288 "subsystem": "keyring", 00:23:51.288 "config": [ 00:23:51.288 { 00:23:51.288 "method": "keyring_file_add_key", 00:23:51.288 "params": { 00:23:51.288 "name": "key0", 00:23:51.288 "path": "/tmp/tmp.wcSOV3rOi5" 00:23:51.288 } 00:23:51.288 } 00:23:51.288 ] 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "subsystem": "iobuf", 00:23:51.288 "config": [ 00:23:51.288 { 00:23:51.288 "method": "iobuf_set_options", 00:23:51.288 "params": { 00:23:51.288 "small_pool_count": 8192, 00:23:51.288 "large_pool_count": 1024, 00:23:51.288 "small_bufsize": 8192, 00:23:51.288 "large_bufsize": 135168 00:23:51.288 } 00:23:51.288 } 00:23:51.288 ] 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "subsystem": "sock", 00:23:51.288 "config": [ 00:23:51.288 { 00:23:51.288 "method": "sock_set_default_impl", 00:23:51.288 "params": { 00:23:51.288 "impl_name": "posix" 00:23:51.288 } 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "method": "sock_impl_set_options", 00:23:51.288 "params": { 00:23:51.288 "impl_name": "ssl", 00:23:51.288 "recv_buf_size": 4096, 00:23:51.288 "send_buf_size": 4096, 00:23:51.288 "enable_recv_pipe": true, 00:23:51.288 "enable_quickack": false, 00:23:51.288 "enable_placement_id": 0, 00:23:51.288 "enable_zerocopy_send_server": true, 00:23:51.288 "enable_zerocopy_send_client": false, 00:23:51.288 "zerocopy_threshold": 0, 00:23:51.288 "tls_version": 0, 00:23:51.288 "enable_ktls": false 00:23:51.288 } 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "method": "sock_impl_set_options", 00:23:51.288 "params": { 00:23:51.288 "impl_name": "posix", 00:23:51.288 "recv_buf_size": 2097152, 00:23:51.288 "send_buf_size": 2097152, 00:23:51.288 "enable_recv_pipe": true, 00:23:51.288 "enable_quickack": false, 00:23:51.288 "enable_placement_id": 0, 00:23:51.288 "enable_zerocopy_send_server": true, 00:23:51.288 "enable_zerocopy_send_client": false, 00:23:51.288 "zerocopy_threshold": 0, 00:23:51.288 "tls_version": 0, 00:23:51.288 "enable_ktls": false 00:23:51.288 } 00:23:51.288 } 00:23:51.288 ] 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "subsystem": "vmd", 00:23:51.288 "config": [] 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "subsystem": "accel", 00:23:51.288 "config": [ 00:23:51.288 { 00:23:51.288 "method": "accel_set_options", 00:23:51.288 "params": { 00:23:51.288 "small_cache_size": 128, 00:23:51.288 "large_cache_size": 16, 00:23:51.288 "task_count": 2048, 00:23:51.288 "sequence_count": 2048, 00:23:51.288 "buf_count": 2048 00:23:51.288 } 00:23:51.288 } 00:23:51.288 ] 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "subsystem": "bdev", 00:23:51.288 "config": [ 00:23:51.288 { 00:23:51.288 "method": "bdev_set_options", 00:23:51.288 "params": { 00:23:51.288 "bdev_io_pool_size": 65535, 00:23:51.288 "bdev_io_cache_size": 256, 00:23:51.288 "bdev_auto_examine": true, 00:23:51.288 "iobuf_small_cache_size": 128, 00:23:51.288 "iobuf_large_cache_size": 16 00:23:51.288 } 00:23:51.288 }, 00:23:51.288 { 00:23:51.288 "method": "bdev_raid_set_options", 00:23:51.288 "params": { 00:23:51.289 "process_window_size_kb": 1024, 00:23:51.289 "process_max_bandwidth_mb_sec": 0 00:23:51.289 } 00:23:51.289 }, 00:23:51.289 { 00:23:51.289 "method": "bdev_iscsi_set_options", 00:23:51.289 "params": { 00:23:51.289 "timeout_sec": 30 00:23:51.289 } 00:23:51.289 }, 00:23:51.289 { 00:23:51.289 "method": 
"bdev_nvme_set_options", 00:23:51.289 "params": { 00:23:51.289 "action_on_timeout": "none", 00:23:51.289 "timeout_us": 0, 00:23:51.289 "timeout_admin_us": 0, 00:23:51.289 "keep_alive_timeout_ms": 10000, 00:23:51.289 "arbitration_burst": 0, 00:23:51.289 "low_priority_weight": 0, 00:23:51.289 "medium_priority_weight": 0, 00:23:51.289 "high_priority_weight": 0, 00:23:51.289 "nvme_adminq_poll_period_us": 10000, 00:23:51.289 "nvme_ioq_poll_period_us": 0, 00:23:51.289 "io_queue_requests": 512, 00:23:51.289 "delay_cmd_submit": true, 00:23:51.289 "transport_retry_count": 4, 00:23:51.289 "bdev_retry_count": 3, 00:23:51.289 "transport_ack_timeout": 0, 00:23:51.289 "ctrlr_loss_timeout_sec": 0, 00:23:51.289 "reconnect_delay_sec": 0, 00:23:51.289 "fast_io_fail_timeout_sec": 0, 00:23:51.289 "disable_auto_failback": false, 00:23:51.289 "generate_uuids": false, 00:23:51.289 "transport_tos": 0, 00:23:51.289 "nvme_error_stat": false, 00:23:51.289 "rdma_srq_size": 0, 00:23:51.289 "io_path_stat": false, 00:23:51.289 "allow_accel_sequence": false, 00:23:51.289 "rdma_max_cq_size": 0, 00:23:51.289 "rdma_cm_event_timeout_ms": 0, 00:23:51.289 "dhchap_digests": [ 00:23:51.289 "sha256", 00:23:51.289 "sha384", 00:23:51.289 "sha512" 00:23:51.289 ], 00:23:51.289 "dhchap_dhgroups": [ 00:23:51.289 "null", 00:23:51.289 "ffdhe2048", 00:23:51.289 "ffdhe3072", 00:23:51.289 "ffdhe4096", 00:23:51.289 "ffdhe6144", 00:23:51.289 "ffdhe8192" 00:23:51.289 ] 00:23:51.289 } 00:23:51.289 }, 00:23:51.289 { 00:23:51.289 "method": "bdev_nvme_attach_controller", 00:23:51.289 "params": { 00:23:51.289 "name": "nvme0", 00:23:51.289 "trtype": "TCP", 00:23:51.289 "adrfam": "IPv4", 00:23:51.289 "traddr": "10.0.0.2", 00:23:51.289 "trsvcid": "4420", 00:23:51.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.289 "prchk_reftag": false, 00:23:51.289 "prchk_guard": false, 00:23:51.289 "ctrlr_loss_timeout_sec": 0, 00:23:51.289 "reconnect_delay_sec": 0, 00:23:51.289 "fast_io_fail_timeout_sec": 0, 00:23:51.289 "psk": "key0", 00:23:51.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.289 "hdgst": false, 00:23:51.289 "ddgst": false 00:23:51.289 } 00:23:51.289 }, 00:23:51.289 { 00:23:51.289 "method": "bdev_nvme_set_hotplug", 00:23:51.289 "params": { 00:23:51.289 "period_us": 100000, 00:23:51.289 "enable": false 00:23:51.289 } 00:23:51.289 }, 00:23:51.289 { 00:23:51.289 "method": "bdev_enable_histogram", 00:23:51.289 "params": { 00:23:51.289 "name": "nvme0n1", 00:23:51.289 "enable": true 00:23:51.289 } 00:23:51.289 }, 00:23:51.289 { 00:23:51.289 "method": "bdev_wait_for_examine" 00:23:51.289 } 00:23:51.289 ] 00:23:51.289 }, 00:23:51.289 { 00:23:51.289 "subsystem": "nbd", 00:23:51.289 "config": [] 00:23:51.289 } 00:23:51.289 ] 00:23:51.289 }' 00:23:51.289 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.289 [2024-07-26 14:18:08.094783] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:23:51.289 [2024-07-26 14:18:08.094861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560784 ] 00:23:51.289 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.289 [2024-07-26 14:18:08.157345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.547 [2024-07-26 14:18:08.280207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.805 [2024-07-26 14:18:08.464664] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.404 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.404 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:52.404 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:52.404 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.685 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.685 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.945 Running I/O for 1 seconds... 00:23:53.879 00:23:53.879 Latency(us) 00:23:53.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.879 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:53.879 Verification LBA range: start 0x0 length 0x2000 00:23:53.879 nvme0n1 : 1.05 2055.29 8.03 0.00 0.00 60927.34 7427.41 102527.43 00:23:53.879 =================================================================================================================== 00:23:53.879 Total : 2055.29 8.03 0.00 0.00 60927.34 7427.41 102527.43 00:23:53.879 0 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:53.879 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:53.879 nvmf_trace.0 00:23:54.138 14:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2560784 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2560784 ']' 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2560784 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2560784 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2560784' 00:23:54.138 killing process with pid 2560784 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2560784 00:23:54.138 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.138 00:23:54.138 Latency(us) 00:23:54.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.138 =================================================================================================================== 00:23:54.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.138 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2560784 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.396 rmmod nvme_tcp 00:23:54.396 rmmod nvme_fabrics 00:23:54.396 rmmod nvme_keyring 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2560630 ']' 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2560630 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2560630 ']' 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2560630 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.396 14:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2560630 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2560630' 00:23:54.396 killing process with pid 2560630 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2560630 00:23:54.396 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2560630 00:23:54.964 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.964 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.964 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.964 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.965 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.965 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.965 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.965 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.A7XRJ2WrQt /tmp/tmp.GHbDnOicIm /tmp/tmp.wcSOV3rOi5 00:23:56.870 00:23:56.870 real 1m30.825s 00:23:56.870 user 2m29.633s 00:23:56.870 sys 0m30.284s 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.870 ************************************ 00:23:56.870 END TEST nvmf_tls 00:23:56.870 ************************************ 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:56.870 ************************************ 00:23:56.870 START TEST nvmf_fips 00:23:56.870 ************************************ 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:56.870 * Looking for test storage... 
00:23:56.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:56.870 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:57.130 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:57.131 Error setting digest 00:23:57.131 00021E2B9A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:57.131 00021E2B9A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.131 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.420 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:00.421 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 
00:24:00.421 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:00.421 Found net devices under 0000:84:00.0: cvl_0_0 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:00.421 Found net devices under 0000:84:00.1: cvl_0_1 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.421 
14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:24:00.421 00:24:00.421 --- 10.0.0.2 ping statistics --- 00:24:00.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.421 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:24:00.421 00:24:00.421 --- 10.0.0.1 ping statistics --- 00:24:00.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.421 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.421 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2563789 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2563789 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2563789 ']' 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.422 [2024-07-26 14:18:17.069747] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
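nvmf_tcp_init, traced above, turns the two discovered ice ports into a self-contained loopback topology: cvl_0_0 is moved into a private network namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The two single-packet pings completing with 0% loss are what let the init path return 0; nvmf_tgt itself is then launched inside the namespace (the ip netns exec ... nvmf_tgt invocation above).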
00:24:00.422 [2024-07-26 14:18:17.069873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.422 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.422 [2024-07-26 14:18:17.165932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.422 [2024-07-26 14:18:17.303253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.422 [2024-07-26 14:18:17.303321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.422 [2024-07-26 14:18:17.303342] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.422 [2024-07-26 14:18:17.303358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.422 [2024-07-26 14:18:17.303372] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.422 [2024-07-26 14:18:17.303416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.358 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.925 [2024-07-26 14:18:18.617278] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.925 [2024-07-26 14:18:18.633270] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.925 [2024-07-26 14:18:18.633573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.925 
[2024-07-26 14:18:18.667949] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:01.925 malloc0 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2563950 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2563950 /var/tmp/bdevperf.sock 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2563950 ']' 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.925 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.184 [2024-07-26 14:18:18.821645] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:24:02.184 [2024-07-26 14:18:18.821756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563950 ] 00:24:02.184 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.184 [2024-07-26 14:18:18.899356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.184 [2024-07-26 14:18:19.046091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.443 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.443 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:02.443 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:03.010 [2024-07-26 14:18:19.751028] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.010 [2024-07-26 14:18:19.751203] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:03.010 TLSTESTn1 00:24:03.010 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:03.268 Running I/O for 10 seconds... 
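The FIPS test scaffolding above has two halves. On the target side, the interchange-format PSK (NVMeTLSkey-1:01:...) is written to key.txt with mode 0600 and setup_nvmf_tgt_conf drives the target over rpc.py; the individual RPC calls are not echoed at this xtrace level, only their side effects (TCP transport init, the experimental TLS listener on 10.0.0.2:4420, the PSK-path deprecation warning, the malloc0 bdev). A minimal hand-rolled equivalent would be roughly the sketch below, with the malloc sizes and the exact flag set as assumptions:

    # target side (assumed RPC sequence; only its effects appear in the trace)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create -b malloc0 32 512        # 32 MiB / 512 B blocks assumed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt

The initiator side is echoed verbatim: bdevperf starts idle (-z) on its own RPC socket, the TLS controller is attached with the same PSK file, and perform_tests kicks off the 10-second verify workload whose results follow:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key.txt
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests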
00:24:13.895 
00:24:13.895 Latency(us)
00:24:13.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.896 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:13.896 Verification LBA range: start 0x0 length 0x2000
00:24:13.896 TLSTESTn1 : 10.04 2367.96 9.25 0.00 0.00 53913.60 8155.59 88546.42
00:24:13.896 ===================================================================================================================
00:24:13.896 Total : 2367.96 9.25 0.00 0.00 53913.60 8155.59 88546.42
00:24:13.896 0
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:13.896 nvmf_trace.0
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2563950
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2563950 ']'
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2563950
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2563950
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2563950'
00:24:13.896 killing process with pid 2563950
00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2563950
00:24:13.896 Received shutdown signal, test time was about 10.000000 seconds
00:24:13.896 
00:24:13.896 Latency(us)
00:24:13.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.896 ===================================================================================================================
00:24:13.896 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:13.896 
[2024-07-26 14:18:30.305334] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2563950 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.896 rmmod nvme_tcp 00:24:13.896 rmmod nvme_fabrics 00:24:13.896 rmmod nvme_keyring 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2563789 ']' 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2563789 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2563789 ']' 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2563789 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2563789 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2563789' 00:24:13.896 killing process with pid 2563789 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2563789 00:24:13.896 [2024-07-26 14:18:30.729777] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:13.896 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2563789 00:24:14.465 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.465 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.465 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.465 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.465 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.465 14:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.465 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.465 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:16.377 00:24:16.377 real 0m19.464s 00:24:16.377 user 0m24.264s 00:24:16.377 sys 0m7.580s 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.377 ************************************ 00:24:16.377 END TEST nvmf_fips 00:24:16.377 ************************************ 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.377 14:18:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.905 
14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:18.905 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:18.905 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:18.905 Found net devices under 0000:84:00.0: cvl_0_0 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:18.905 Found net devices under 0000:84:00.1: cvl_0_1 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.905 ************************************ 00:24:18.905 START TEST nvmf_perf_adq 00:24:18.905 ************************************ 00:24:18.905 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:19.164 * Looking for test storage... 
00:24:19.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.164 14:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.164 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.697 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.698 14:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:21.698 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:21.698 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:21.698 Found net devices under 0000:84:00.0: cvl_0_0 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:21.698 Found net devices under 0000:84:00.1: cvl_0_1 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:24:21.698 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:24:22.633 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:24:24.536 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
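Earlier in this block, adq_reload_driver bounces the NIC driver before the ADQ run:

    rmmod ice
    modprobe ice
    sleep 5

The reload is presumably there to drop any queue or traffic-class state left on the E810 ports by earlier tests (the trace itself does not state the reason), and the five-second sleep gives the cvl_* netdevs time to reappear before nvmftestinit runs the rediscovery pass seen around this point.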
00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:29.811 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:29.811 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:29.811 Found net devices under 0000:84:00.0: cvl_0_0 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.811 14:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:29.811 Found net devices under 0000:84:00.1: cvl_0_1 00:24:29.811 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:29.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:24:29.812 00:24:29.812 --- 10.0.0.2 ping statistics --- 00:24:29.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.812 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:24:29.812 00:24:29.812 --- 10.0.0.1 ping statistics --- 00:24:29.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.812 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2569976 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2569976 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2569976 ']' 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:29.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.812 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:29.812 [2024-07-26 14:18:46.650630] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:24:29.812 [2024-07-26 14:18:46.650718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.071 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.071 [2024-07-26 14:18:46.757077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.071 [2024-07-26 14:18:46.881896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.071 [2024-07-26 14:18:46.881956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.071 [2024-07-26 14:18:46.881973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.071 [2024-07-26 14:18:46.881987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.071 [2024-07-26 14:18:46.881999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.071 [2024-07-26 14:18:46.882102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.071 [2024-07-26 14:18:46.882194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.071 [2024-07-26 14:18:46.882264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.071 [2024-07-26 14:18:46.882268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:30.071 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.072 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
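Stripped of the xtrace prefixes, the target bring-up that just completed is one command plus a poll loop. A sketch, with the workspace path shortened:

    # Run the target inside the namespace. --wait-for-rpc starts the
    # reactors (-m 0xF pins four of them, cores 0-3) but defers framework
    # initialization until an explicit RPC, so socket options can still be
    # changed before any transport or listener exists. -e 0xFFFF enables
    # every tracepoint group, matching the spdk_trace notices above.
    ip netns exec cvl_0_0_ns_spdk \
        ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # waitforlisten in the trace then polls until the app accepts RPCs on
    # /var/tmp/spdk.sock.

This deferral is what lets adq_configure_nvmf_target work: the sock_impl_set_options call that follows runs with --enable-placement-id 0 in this baseline pass and --enable-placement-id 1 in the ADQ pass further down, and it must land before framework_start_init and nvmf_create_transport.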
00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.330 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 [2024-07-26 14:18:47.124797] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 Malloc1 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.330 [2024-07-26 14:18:47.177511] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2570001 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:24:30.330 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:30.330 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:24:32.860 "tick_rate": 2700000000, 00:24:32.860 "poll_groups": [ 00:24:32.860 { 00:24:32.860 "name": "nvmf_tgt_poll_group_000", 00:24:32.860 "admin_qpairs": 1, 00:24:32.860 "io_qpairs": 1, 00:24:32.860 "current_admin_qpairs": 1, 00:24:32.860 "current_io_qpairs": 1, 00:24:32.860 "pending_bdev_io": 0, 00:24:32.860 "completed_nvme_io": 17901, 00:24:32.860 "transports": [ 00:24:32.860 { 00:24:32.860 "trtype": "TCP" 00:24:32.860 } 00:24:32.860 ] 00:24:32.860 }, 00:24:32.860 { 00:24:32.860 "name": "nvmf_tgt_poll_group_001", 00:24:32.860 "admin_qpairs": 0, 00:24:32.860 "io_qpairs": 1, 00:24:32.860 "current_admin_qpairs": 0, 00:24:32.860 "current_io_qpairs": 1, 00:24:32.860 "pending_bdev_io": 0, 00:24:32.860 "completed_nvme_io": 18049, 00:24:32.860 "transports": [ 00:24:32.860 { 00:24:32.860 "trtype": "TCP" 00:24:32.860 } 00:24:32.860 ] 00:24:32.860 }, 00:24:32.860 { 00:24:32.860 "name": "nvmf_tgt_poll_group_002", 00:24:32.860 "admin_qpairs": 0, 00:24:32.860 "io_qpairs": 1, 00:24:32.860 "current_admin_qpairs": 0, 00:24:32.860 "current_io_qpairs": 1, 00:24:32.860 "pending_bdev_io": 0, 00:24:32.860 "completed_nvme_io": 18023, 00:24:32.860 "transports": [ 00:24:32.860 { 00:24:32.860 "trtype": "TCP" 00:24:32.860 } 00:24:32.860 ] 00:24:32.860 }, 00:24:32.860 { 00:24:32.860 "name": "nvmf_tgt_poll_group_003", 00:24:32.860 "admin_qpairs": 0, 00:24:32.860 "io_qpairs": 1, 00:24:32.860 "current_admin_qpairs": 0, 00:24:32.860 "current_io_qpairs": 1, 00:24:32.860 "pending_bdev_io": 0, 00:24:32.860 "completed_nvme_io": 17481, 00:24:32.860 "transports": [ 00:24:32.860 { 00:24:32.860 "trtype": "TCP" 00:24:32.860 } 00:24:32.860 ] 00:24:32.860 } 00:24:32.860 ] 00:24:32.860 }' 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:24:32.860 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 2570001 00:24:40.974 Initializing NVMe Controllers 00:24:40.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:40.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:40.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:40.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:40.974 Initialization complete. Launching workers. 00:24:40.974 ======================================================== 00:24:40.974 Latency(us) 00:24:40.974 Device Information : IOPS MiB/s Average min max 00:24:40.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9450.73 36.92 6771.51 2039.59 10585.83 00:24:40.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9664.22 37.75 6624.08 3586.58 8783.76 00:24:40.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9690.62 37.85 6604.09 2583.93 10362.50 00:24:40.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9655.92 37.72 6627.89 2721.31 9362.10 00:24:40.974 ======================================================== 00:24:40.974 Total : 38461.50 150.24 6656.23 2039.59 10585.83 00:24:40.974 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:40.974 rmmod nvme_tcp 00:24:40.974 rmmod nvme_fabrics 00:24:40.974 rmmod nvme_keyring 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2569976 ']' 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2569976 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2569976 ']' 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2569976 00:24:40.974 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2569976 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.975 14:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2569976' 00:24:40.975 killing process with pid 2569976 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2569976 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2569976 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.975 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.560 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.560 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:24:43.560 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:24:43.820 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:24:46.355 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:51.626 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:51.627 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:51.627 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:51.627 Found net devices under 0000:84:00.0: cvl_0_0 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:51.627 Found net devices under 0000:84:00.1: cvl_0_1 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:51.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:24:51.627 00:24:51.627 --- 10.0.0.2 ping statistics --- 00:24:51.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.627 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:24:51.627 00:24:51.627 --- 10.0.0.1 ping statistics --- 00:24:51.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.627 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:51.627 net.core.busy_poll = 1 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:51.627 net.core.busy_read = 1 00:24:51.627 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:51.628 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:51.628 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:51.628 
14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2572617 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2572617 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2572617 ']' 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.628 [2024-07-26 14:19:08.141088] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:24:51.628 [2024-07-26 14:19:08.141258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.628 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.628 [2024-07-26 14:19:08.247909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.628 [2024-07-26 14:19:08.372292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.628 [2024-07-26 14:19:08.372349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.628 [2024-07-26 14:19:08.372366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.628 [2024-07-26 14:19:08.372379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.628 [2024-07-26 14:19:08.372390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
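The adq_configure_driver block a few lines up is the NIC-side half of ADQ and is easy to lose in the trace. Collected into one sketch, assuming the rmmod/modprobe of ice above has recreated the same cvl_0_0 port inside the namespace:

    nse() { ip netns exec cvl_0_0_ns_spdk "$@"; }    # helper, not in the original script

    nse ethtool --offload cvl_0_0 hw-tc-offload on
    nse ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1    # busy-poll sockets instead of sleeping on interrupts
    sysctl -w net.core.busy_read=1
    # Two hardware traffic classes in channel mode: the "2@0 2@2" map gives
    # TC0 queues 0-1 and TC1 queues 2-3.
    nse tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    nse tc qdisc add dev cvl_0_0 ingress
    # Classify NVMe/TCP flows (dst 10.0.0.2:4420) into TC1 purely in hardware.
    nse tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper that closes the block aligns each queue's transmit steering with its receive queue, so a given connection stays on one hardware queue pair in both directions.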
00:24:51.628 [2024-07-26 14:19:08.372740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.628 [2024-07-26 14:19:08.372819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.628 [2024-07-26 14:19:08.372844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.628 [2024-07-26 14:19:08.372848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.628 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.887 [2024-07-26 14:19:08.618250] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.887 Malloc1 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.887 [2024-07-26 14:19:08.672829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2572768 00:24:51.887 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:24:51.888 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:51.888 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.418 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:24:54.418 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.418 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.418 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:24:54.418 "tick_rate": 2700000000, 00:24:54.419 "poll_groups": [ 00:24:54.419 { 00:24:54.419 "name": "nvmf_tgt_poll_group_000", 00:24:54.419 "admin_qpairs": 1, 00:24:54.419 "io_qpairs": 2, 00:24:54.419 "current_admin_qpairs": 1, 00:24:54.419 
"current_io_qpairs": 2, 00:24:54.419 "pending_bdev_io": 0, 00:24:54.419 "completed_nvme_io": 23092, 00:24:54.419 "transports": [ 00:24:54.419 { 00:24:54.419 "trtype": "TCP" 00:24:54.419 } 00:24:54.419 ] 00:24:54.419 }, 00:24:54.419 { 00:24:54.419 "name": "nvmf_tgt_poll_group_001", 00:24:54.419 "admin_qpairs": 0, 00:24:54.419 "io_qpairs": 2, 00:24:54.419 "current_admin_qpairs": 0, 00:24:54.419 "current_io_qpairs": 2, 00:24:54.419 "pending_bdev_io": 0, 00:24:54.419 "completed_nvme_io": 23639, 00:24:54.419 "transports": [ 00:24:54.419 { 00:24:54.419 "trtype": "TCP" 00:24:54.419 } 00:24:54.419 ] 00:24:54.419 }, 00:24:54.419 { 00:24:54.419 "name": "nvmf_tgt_poll_group_002", 00:24:54.419 "admin_qpairs": 0, 00:24:54.419 "io_qpairs": 0, 00:24:54.419 "current_admin_qpairs": 0, 00:24:54.419 "current_io_qpairs": 0, 00:24:54.419 "pending_bdev_io": 0, 00:24:54.419 "completed_nvme_io": 0, 00:24:54.419 "transports": [ 00:24:54.419 { 00:24:54.419 "trtype": "TCP" 00:24:54.419 } 00:24:54.419 ] 00:24:54.419 }, 00:24:54.419 { 00:24:54.419 "name": "nvmf_tgt_poll_group_003", 00:24:54.419 "admin_qpairs": 0, 00:24:54.419 "io_qpairs": 0, 00:24:54.419 "current_admin_qpairs": 0, 00:24:54.419 "current_io_qpairs": 0, 00:24:54.419 "pending_bdev_io": 0, 00:24:54.419 "completed_nvme_io": 0, 00:24:54.419 "transports": [ 00:24:54.419 { 00:24:54.419 "trtype": "TCP" 00:24:54.419 } 00:24:54.419 ] 00:24:54.419 } 00:24:54.419 ] 00:24:54.419 }' 00:24:54.419 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:54.419 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:24:54.419 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:24:54.419 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:24:54.419 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2572768 00:25:02.528 Initializing NVMe Controllers 00:25:02.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:02.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:02.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:02.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:02.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:02.528 Initialization complete. Launching workers. 
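The workers being launched are the initiator half of the same host. This second pass drives exactly the same workload as the baseline run, so differences in the table below come from the placement-id and ADQ configuration, not from the load. The invocation, verbatim apart from the shortened path:

    # Initiator on cores 4-7 (-c 0xF0), disjoint from the target reactors
    # on cores 0-3; queue depth 64, 4 KiB random reads, 10 seconds, one
    # I/O qpair per core against the subsystem created above.
    ./spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'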
00:25:02.528 ======================================================== 00:25:02.528 Latency(us) 00:25:02.528 Device Information : IOPS MiB/s Average min max 00:25:02.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5980.70 23.36 10703.76 1778.42 54230.50 00:25:02.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6315.00 24.67 10137.58 1915.30 56511.70 00:25:02.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6090.50 23.79 10543.39 1845.97 55613.07 00:25:02.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6286.00 24.55 10184.46 2031.72 54857.83 00:25:02.528 ======================================================== 00:25:02.528 Total : 24672.19 96.38 10386.95 1778.42 56511.70 00:25:02.528 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:02.528 rmmod nvme_tcp 00:25:02.528 rmmod nvme_fabrics 00:25:02.528 rmmod nvme_keyring 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2572617 ']' 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2572617 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2572617 ']' 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2572617 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2572617 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2572617' 00:25:02.528 killing process with pid 2572617 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2572617 00:25:02.528 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2572617 00:25:02.528 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.528 
14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.528 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.528 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.528 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.528 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.528 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.528 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.060 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.060 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:25:05.060 00:25:05.060 real 0m45.597s 00:25:05.060 user 2m41.565s 00:25:05.060 sys 0m10.482s 00:25:05.060 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.060 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:05.060 ************************************ 00:25:05.060 END TEST nvmf_perf_adq 00:25:05.060 ************************************ 00:25:05.060 14:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:05.060 14:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.061 ************************************ 00:25:05.061 START TEST nvmf_shutdown 00:25:05.061 ************************************ 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:05.061 * Looking for test storage... 
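Before moving into the shutdown tests, note how each perf_adq pass above was actually graded: not on raw IOPS but on where the I/O qpairs landed, via nvmf_get_stats. The baseline pass expects all four poll groups busy with one qpair each; the ADQ pass expects the four qpairs packed onto two groups, leaving two idle. Standalone, the second check looks like this (rpc.py path shortened; rpc_cmd in the trace is a thin wrapper over it):

    stats=$(./spdk/scripts/rpc.py nvmf_get_stats)
    # The jq filter prints one line per idle poll group; this run produced 2.
    idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           <<< "$stats" | wc -l)
    if [[ $idle -lt 2 ]]; then
        echo "ADQ steering failed: expected at least 2 idle poll groups, got $idle"
    fi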
00:25:05.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.061 14:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:05.061 14:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:05.061 ************************************ 00:25:05.061 START TEST nvmf_shutdown_tc1 00:25:05.061 ************************************ 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.061 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.062 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.062 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.062 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.062 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.062 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.630 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:07.631 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:07.631 14:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:07.631 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:07.631 Found net devices under 0000:84:00.0: cvl_0_0 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:07.631 Found net devices under 0000:84:00.1: cvl_0_1 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.631 14:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:25:07.631 00:25:07.631 --- 10.0.0.2 ping statistics --- 00:25:07.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.631 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:25:07.631 00:25:07.631 --- 10.0.0.1 ping statistics --- 00:25:07.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.631 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2575936 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2575936 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2575936 ']' 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.631 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.632 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.632 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.632 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:07.632 [2024-07-26 14:19:24.500731] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:25:07.632 [2024-07-26 14:19:24.500825] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.891 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.891 [2024-07-26 14:19:24.584622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.891 [2024-07-26 14:19:24.726254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.891 [2024-07-26 14:19:24.726321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.891 [2024-07-26 14:19:24.726341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.891 [2024-07-26 14:19:24.726356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.891 [2024-07-26 14:19:24.726370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
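The network plumbing the shutdown tests rely on is assembled in the nvmf_tcp_init / nvmfappstart trace above: one ice port (cvl_0_0) is moved into a fresh network namespace and hosts nvmf_tgt, the sibling port (cvl_0_1) stays in the root namespace as the initiator side, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed into a standalone sketch, with the device, namespace, and address values copied from the log (flush/cleanup steps omitted; the RPC polling loop at the end is an illustrative stand-in for waitforlisten, not its exact implementation):

    TARGET_NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0        # 0000:84:00.0, hosts the NVMe-oF target
    INITIATOR_IF=cvl_0_1     # 0000:84:00.1, stays in the root namespace

    ip netns add "$TARGET_NS"
    ip link set "$TARGET_IF" netns "$TARGET_NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                         # initiator IP
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF" # target IP

    ip link set "$INITIATOR_IF" up
    ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
    ip netns exec "$TARGET_NS" ip link set lo up

    # allow NVMe/TCP (port 4420) traffic on the initiator-side interface, as in the log
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # connectivity checks mirrored from the ping output above
    ping -c 1 10.0.0.2
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

    # launch the target inside the namespace, then poll its RPC UNIX socket
    # until the app answers; the socket lives on the filesystem, so it is
    # reachable from the root namespace
    ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        sleep 0.5
    done

Running the target under ip netns exec is what lets a single host exercise real NIC-to-NIC TCP traffic: with the two e810 ports presumably cabled back-to-back, the kernel routes 10.0.0.1 <-> 10.0.0.2 over the physical links rather than loopback.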
00:25:07.891 [2024-07-26 14:19:24.726440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.891 [2024-07-26 14:19:24.726498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.891 [2024-07-26 14:19:24.726528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:07.891 [2024-07-26 14:19:24.726531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:08.149 [2024-07-26 14:19:24.914839] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.149 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.150 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:08.150 Malloc1 00:25:08.150 [2024-07-26 14:19:25.013735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.408 Malloc2 00:25:08.408 Malloc3 00:25:08.408 Malloc4 00:25:08.408 Malloc5 00:25:08.408 Malloc6 00:25:08.670 Malloc7 00:25:08.670 Malloc8 00:25:08.670 Malloc9 00:25:08.670 Malloc10 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2576119 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2576119 /var/tmp/bdevperf.sock 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2576119 ']' 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.670 14:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.670 { 00:25:08.670 "params": { 00:25:08.670 "name": "Nvme$subsystem", 00:25:08.670 "trtype": "$TEST_TRANSPORT", 00:25:08.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.670 "adrfam": "ipv4", 00:25:08.670 "trsvcid": "$NVMF_PORT", 00:25:08.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.670 "hdgst": ${hdgst:-false}, 00:25:08.670 "ddgst": ${ddgst:-false} 00:25:08.670 }, 00:25:08.670 "method": "bdev_nvme_attach_controller" 00:25:08.670 } 00:25:08.670 EOF 00:25:08.670 )") 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.670 { 00:25:08.670 "params": { 00:25:08.670 "name": "Nvme$subsystem", 00:25:08.670 "trtype": "$TEST_TRANSPORT", 00:25:08.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.670 "adrfam": "ipv4", 00:25:08.670 "trsvcid": "$NVMF_PORT", 00:25:08.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.670 "hdgst": ${hdgst:-false}, 00:25:08.670 "ddgst": ${ddgst:-false} 00:25:08.670 }, 00:25:08.670 "method": "bdev_nvme_attach_controller" 00:25:08.670 } 00:25:08.670 EOF 00:25:08.670 )") 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.670 { 00:25:08.670 "params": { 00:25:08.670 "name": 
"Nvme$subsystem", 00:25:08.670 "trtype": "$TEST_TRANSPORT", 00:25:08.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.670 "adrfam": "ipv4", 00:25:08.670 "trsvcid": "$NVMF_PORT", 00:25:08.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.670 "hdgst": ${hdgst:-false}, 00:25:08.670 "ddgst": ${ddgst:-false} 00:25:08.670 }, 00:25:08.670 "method": "bdev_nvme_attach_controller" 00:25:08.670 } 00:25:08.670 EOF 00:25:08.670 )") 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.670 { 00:25:08.670 "params": { 00:25:08.670 "name": "Nvme$subsystem", 00:25:08.670 "trtype": "$TEST_TRANSPORT", 00:25:08.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.670 "adrfam": "ipv4", 00:25:08.670 "trsvcid": "$NVMF_PORT", 00:25:08.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.670 "hdgst": ${hdgst:-false}, 00:25:08.670 "ddgst": ${ddgst:-false} 00:25:08.670 }, 00:25:08.670 "method": "bdev_nvme_attach_controller" 00:25:08.670 } 00:25:08.670 EOF 00:25:08.670 )") 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.670 { 00:25:08.670 "params": { 00:25:08.670 "name": "Nvme$subsystem", 00:25:08.670 "trtype": "$TEST_TRANSPORT", 00:25:08.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.670 "adrfam": "ipv4", 00:25:08.670 "trsvcid": "$NVMF_PORT", 00:25:08.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.670 "hdgst": ${hdgst:-false}, 00:25:08.670 "ddgst": ${ddgst:-false} 00:25:08.670 }, 00:25:08.670 "method": "bdev_nvme_attach_controller" 00:25:08.670 } 00:25:08.670 EOF 00:25:08.670 )") 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.670 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.670 { 00:25:08.670 "params": { 00:25:08.670 "name": "Nvme$subsystem", 00:25:08.670 "trtype": "$TEST_TRANSPORT", 00:25:08.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.670 "adrfam": "ipv4", 00:25:08.670 "trsvcid": "$NVMF_PORT", 00:25:08.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.670 "hdgst": ${hdgst:-false}, 00:25:08.670 "ddgst": ${ddgst:-false} 00:25:08.670 }, 00:25:08.671 "method": "bdev_nvme_attach_controller" 00:25:08.671 } 00:25:08.671 EOF 00:25:08.671 )") 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.671 { 00:25:08.671 "params": { 00:25:08.671 "name": "Nvme$subsystem", 00:25:08.671 "trtype": "$TEST_TRANSPORT", 00:25:08.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.671 "adrfam": "ipv4", 00:25:08.671 "trsvcid": "$NVMF_PORT", 00:25:08.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.671 "hdgst": ${hdgst:-false}, 00:25:08.671 "ddgst": ${ddgst:-false} 00:25:08.671 }, 00:25:08.671 "method": "bdev_nvme_attach_controller" 00:25:08.671 } 00:25:08.671 EOF 00:25:08.671 )") 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.671 { 00:25:08.671 "params": { 00:25:08.671 "name": "Nvme$subsystem", 00:25:08.671 "trtype": "$TEST_TRANSPORT", 00:25:08.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.671 "adrfam": "ipv4", 00:25:08.671 "trsvcid": "$NVMF_PORT", 00:25:08.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.671 "hdgst": ${hdgst:-false}, 00:25:08.671 "ddgst": ${ddgst:-false} 00:25:08.671 }, 00:25:08.671 "method": "bdev_nvme_attach_controller" 00:25:08.671 } 00:25:08.671 EOF 00:25:08.671 )") 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.671 { 00:25:08.671 "params": { 00:25:08.671 "name": "Nvme$subsystem", 00:25:08.671 "trtype": "$TEST_TRANSPORT", 00:25:08.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.671 "adrfam": "ipv4", 00:25:08.671 "trsvcid": "$NVMF_PORT", 00:25:08.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.671 "hdgst": ${hdgst:-false}, 00:25:08.671 "ddgst": ${ddgst:-false} 00:25:08.671 }, 00:25:08.671 "method": "bdev_nvme_attach_controller" 00:25:08.671 } 00:25:08.671 EOF 00:25:08.671 )") 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.671 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.671 { 00:25:08.671 "params": { 00:25:08.671 "name": "Nvme$subsystem", 00:25:08.671 "trtype": "$TEST_TRANSPORT", 00:25:08.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.671 "adrfam": "ipv4", 00:25:08.671 "trsvcid": "$NVMF_PORT", 00:25:08.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.671 "hdgst": ${hdgst:-false}, 00:25:08.671 "ddgst": ${ddgst:-false} 00:25:08.671 }, 00:25:08.671 "method": "bdev_nvme_attach_controller" 00:25:08.671 } 00:25:08.671 EOF 00:25:08.671 )") 00:25:08.977 14:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:08.978 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:25:08.978 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:25:08.978 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme1", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme2", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme3", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme4", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme5", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme6", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme7", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme8", 00:25:08.978 "trtype": "tcp", 
00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme9", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 },{ 00:25:08.978 "params": { 00:25:08.978 "name": "Nvme10", 00:25:08.978 "trtype": "tcp", 00:25:08.978 "traddr": "10.0.0.2", 00:25:08.978 "adrfam": "ipv4", 00:25:08.978 "trsvcid": "4420", 00:25:08.978 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:08.978 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:08.978 "hdgst": false, 00:25:08.978 "ddgst": false 00:25:08.978 }, 00:25:08.978 "method": "bdev_nvme_attach_controller" 00:25:08.978 }' 00:25:08.978 [2024-07-26 14:19:25.568574] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:25:08.978 [2024-07-26 14:19:25.568663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:08.978 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.978 [2024-07-26 14:19:25.637163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.978 [2024-07-26 14:19:25.757899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2576119 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:10.877 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:25:11.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2576119 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2575936 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.811 { 00:25:11.811 "params": { 00:25:11.811 "name": "Nvme$subsystem", 00:25:11.811 "trtype": "$TEST_TRANSPORT", 00:25:11.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.811 "adrfam": "ipv4", 00:25:11.811 "trsvcid": "$NVMF_PORT", 00:25:11.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.811 "hdgst": ${hdgst:-false}, 00:25:11.811 "ddgst": ${ddgst:-false} 00:25:11.811 }, 00:25:11.811 "method": "bdev_nvme_attach_controller" 00:25:11.811 } 00:25:11.811 EOF 00:25:11.811 )") 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.811 { 00:25:11.811 "params": { 00:25:11.811 "name": "Nvme$subsystem", 00:25:11.811 "trtype": "$TEST_TRANSPORT", 00:25:11.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.811 "adrfam": "ipv4", 00:25:11.811 "trsvcid": "$NVMF_PORT", 00:25:11.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.811 "hdgst": ${hdgst:-false}, 00:25:11.811 "ddgst": ${ddgst:-false} 00:25:11.811 }, 00:25:11.811 "method": "bdev_nvme_attach_controller" 00:25:11.811 } 00:25:11.811 EOF 00:25:11.811 )") 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.811 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.811 { 00:25:11.811 "params": { 00:25:11.811 "name": "Nvme$subsystem", 00:25:11.811 "trtype": "$TEST_TRANSPORT", 00:25:11.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.811 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.812 14:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.812 { 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme$subsystem", 00:25:11.812 "trtype": "$TEST_TRANSPORT", 00:25:11.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.812 { 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme$subsystem", 00:25:11.812 "trtype": "$TEST_TRANSPORT", 00:25:11.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.812 { 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme$subsystem", 00:25:11.812 "trtype": "$TEST_TRANSPORT", 00:25:11.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.812 { 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme$subsystem", 00:25:11.812 "trtype": "$TEST_TRANSPORT", 00:25:11.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.812 { 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme$subsystem", 00:25:11.812 "trtype": "$TEST_TRANSPORT", 00:25:11.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.812 { 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme$subsystem", 00:25:11.812 "trtype": "$TEST_TRANSPORT", 00:25:11.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.812 { 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme$subsystem", 00:25:11.812 "trtype": "$TEST_TRANSPORT", 00:25:11.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "$NVMF_PORT", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.812 "hdgst": ${hdgst:-false}, 00:25:11.812 "ddgst": ${ddgst:-false} 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 } 00:25:11.812 EOF 00:25:11.812 )") 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:25:11.812 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme1", 00:25:11.812 "trtype": "tcp", 00:25:11.812 "traddr": "10.0.0.2", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "4420", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.812 "hdgst": false, 00:25:11.812 "ddgst": false 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 },{ 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme2", 00:25:11.812 "trtype": "tcp", 00:25:11.812 "traddr": "10.0.0.2", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "4420", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:11.812 "hdgst": false, 00:25:11.812 "ddgst": false 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 },{ 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme3", 00:25:11.812 "trtype": "tcp", 00:25:11.812 "traddr": "10.0.0.2", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "4420", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:11.812 "hdgst": false, 00:25:11.812 "ddgst": false 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 },{ 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme4", 00:25:11.812 "trtype": "tcp", 00:25:11.812 "traddr": "10.0.0.2", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "4420", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:11.812 "hdgst": false, 00:25:11.812 "ddgst": false 00:25:11.812 }, 00:25:11.812 "method": "bdev_nvme_attach_controller" 00:25:11.812 },{ 00:25:11.812 "params": { 00:25:11.812 "name": "Nvme5", 00:25:11.812 "trtype": "tcp", 00:25:11.812 "traddr": "10.0.0.2", 00:25:11.812 "adrfam": "ipv4", 00:25:11.812 "trsvcid": "4420", 00:25:11.812 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:11.812 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:11.812 "hdgst": false, 00:25:11.812 "ddgst": false 00:25:11.812 }, 00:25:11.813 "method": "bdev_nvme_attach_controller" 00:25:11.813 },{ 00:25:11.813 "params": { 00:25:11.813 "name": "Nvme6", 00:25:11.813 "trtype": "tcp", 00:25:11.813 "traddr": "10.0.0.2", 00:25:11.813 "adrfam": "ipv4", 00:25:11.813 "trsvcid": "4420", 00:25:11.813 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:11.813 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:11.813 "hdgst": false, 00:25:11.813 "ddgst": false 00:25:11.813 }, 00:25:11.813 "method": "bdev_nvme_attach_controller" 00:25:11.813 },{ 00:25:11.813 "params": { 00:25:11.813 "name": "Nvme7", 00:25:11.813 "trtype": "tcp", 00:25:11.813 "traddr": "10.0.0.2", 00:25:11.813 "adrfam": "ipv4", 00:25:11.813 "trsvcid": "4420", 00:25:11.813 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:11.813 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:11.813 "hdgst": false, 00:25:11.813 "ddgst": false 00:25:11.813 }, 00:25:11.813 "method": "bdev_nvme_attach_controller" 00:25:11.813 },{ 00:25:11.813 "params": { 00:25:11.813 "name": "Nvme8", 00:25:11.813 "trtype": "tcp", 00:25:11.813 "traddr": "10.0.0.2", 00:25:11.813 "adrfam": "ipv4", 00:25:11.813 "trsvcid": "4420", 00:25:11.813 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:11.813 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:25:11.813 "hdgst": false, 00:25:11.813 "ddgst": false 00:25:11.813 }, 00:25:11.813 "method": "bdev_nvme_attach_controller" 00:25:11.813 },{ 00:25:11.813 "params": { 00:25:11.813 "name": "Nvme9", 00:25:11.813 "trtype": "tcp", 00:25:11.813 "traddr": "10.0.0.2", 00:25:11.813 "adrfam": "ipv4", 00:25:11.813 "trsvcid": "4420", 00:25:11.813 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:11.813 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:11.813 "hdgst": false, 00:25:11.813 "ddgst": false 00:25:11.813 }, 00:25:11.813 "method": "bdev_nvme_attach_controller" 00:25:11.813 },{ 00:25:11.813 "params": { 00:25:11.813 "name": "Nvme10", 00:25:11.813 "trtype": "tcp", 00:25:11.813 "traddr": "10.0.0.2", 00:25:11.813 "adrfam": "ipv4", 00:25:11.813 "trsvcid": "4420", 00:25:11.813 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:11.813 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:11.813 "hdgst": false, 00:25:11.813 "ddgst": false 00:25:11.813 }, 00:25:11.813 "method": "bdev_nvme_attach_controller" 00:25:11.813 }' 00:25:11.813 [2024-07-26 14:19:28.459947] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:25:11.813 [2024-07-26 14:19:28.460043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576531 ] 00:25:11.813 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.813 [2024-07-26 14:19:28.542643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.813 [2024-07-26 14:19:28.667950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.712 Running I/O for 1 seconds... 00:25:14.646 00:25:14.646 Latency(us) 00:25:14.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.646 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme1n1 : 1.08 178.56 11.16 0.00 0.00 354481.75 24466.77 284280.60 00:25:14.646 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme2n1 : 1.18 216.64 13.54 0.00 0.00 284427.00 18641.35 243891.01 00:25:14.646 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme3n1 : 1.19 219.19 13.70 0.00 0.00 276510.20 11165.39 279620.27 00:25:14.646 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme4n1 : 1.19 215.31 13.46 0.00 0.00 278775.28 37865.24 304475.40 00:25:14.646 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme5n1 : 1.19 214.54 13.41 0.00 0.00 274906.07 20971.52 281173.71 00:25:14.646 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme6n1 : 1.21 211.36 13.21 0.00 0.00 274577.26 23398.78 282727.16 00:25:14.646 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme7n1 : 1.21 212.28 13.27 0.00 0.00 268123.02 22816.24 284280.60 00:25:14.646 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 
Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme8n1 : 1.18 217.57 13.60 0.00 0.00 255428.84 29127.11 268746.15 00:25:14.646 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme9n1 : 1.22 210.62 13.16 0.00 0.00 260623.74 22816.24 295154.73 00:25:14.646 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.646 Verification LBA range: start 0x0 length 0x400 00:25:14.646 Nvme10n1 : 1.22 209.88 13.12 0.00 0.00 256929.94 21554.06 315349.52 00:25:14.646 =================================================================================================================== 00:25:14.646 Total : 2105.95 131.62 0.00 0.00 276529.47 11165.39 315349.52 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:14.905 rmmod nvme_tcp 00:25:14.905 rmmod nvme_fabrics 00:25:14.905 rmmod nvme_keyring 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2575936 ']' 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2575936 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2575936 ']' 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2575936 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:25:14.905 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2575936 00:25:15.163 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:15.163 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:15.163 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2575936' 00:25:15.163 killing process with pid 2575936 00:25:15.163 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2575936 00:25:15.163 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2575936 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.731 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.264 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:18.264 00:25:18.264 real 0m12.989s 00:25:18.264 user 0m35.797s 00:25:18.264 sys 0m3.921s 00:25:18.264 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:18.264 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:18.264 ************************************ 00:25:18.264 END TEST nvmf_shutdown_tc1 00:25:18.264 ************************************ 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:18.265 ************************************ 00:25:18.265 START TEST nvmf_shutdown_tc2 00:25:18.265 ************************************ 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:25:18.265 14:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:18.265 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:18.265 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:18.265 Found net devices under 0000:84:00.0: cvl_0_0 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.265 14:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:18.265 Found net devices under 0000:84:00.1: cvl_0_1 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.265 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.266 14:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:18.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:18.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms
00:25:18.266
00:25:18.266 --- 10.0.0.2 ping statistics ---
00:25:18.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:18.266 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:18.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:18.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:25:18.266
00:25:18.266 --- 10.0.0.1 ping statistics ---
00:25:18.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:18.266 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2577300
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2577300
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2577300 ']'
00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:18.266 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.266 [2024-07-26 14:19:34.840368] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:25:18.266 [2024-07-26 14:19:34.840483] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.266 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.266 [2024-07-26 14:19:34.924554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:18.266 [2024-07-26 14:19:35.065894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.266 [2024-07-26 14:19:35.065958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.266 [2024-07-26 14:19:35.065979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.266 [2024-07-26 14:19:35.065996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.266 [2024-07-26 14:19:35.066010] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
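The trace around this point is inside autotest_common.sh's waitforlisten, which blocks until the nvmf_tgt just launched in the cvl_0_0_ns_spdk namespace (pid 2577300) starts answering on /var/tmp/spdk.sock. A minimal sketch of that polling helper follows; rpc_addr, max_retries, the pid argument and the status message come from the trace, while the rest of the loop body (in particular the rpc_get_methods probe) is an assumption about how the harness checks liveness.

	# Sketch (assumed body): poll the target's JSON-RPC socket until it
	# responds or the retry budget runs out; fail fast if the pid is gone.
	waitforlisten() {
		local pid=$1
		local rpc_addr=${2:-/var/tmp/spdk.sock}  # from the trace (@835)
		local max_retries=100                    # from the trace (@836)
		local rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}  # assumed
		local i
		echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
		for ((i = max_retries; i > 0; i--)); do
			kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
			if "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
				return 0  # RPC server is up and listening
			fi
			sleep 0.5
		done
		return 1  # never came up within the retry budget
	}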
00:25:18.266 [2024-07-26 14:19:35.066105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.266 [2024-07-26 14:19:35.066164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.266 [2024-07-26 14:19:35.066221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:18.266 [2024-07-26 14:19:35.066225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.525 [2024-07-26 14:19:35.256785] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.525 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.525 Malloc1 00:25:18.525 [2024-07-26 14:19:35.347261] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.525 Malloc2 00:25:18.784 Malloc3 00:25:18.784 Malloc4 00:25:18.784 Malloc5 00:25:18.784 Malloc6 00:25:18.784 Malloc7 00:25:19.043 Malloc8 00:25:19.043 Malloc9 00:25:19.043 Malloc10 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2577475 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2577475 /var/tmp/bdevperf.sock 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2577475 ']' 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.043 14:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:19.043 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.043 { 00:25:19.043 "params": { 00:25:19.043 "name": "Nvme$subsystem", 00:25:19.043 "trtype": "$TEST_TRANSPORT", 00:25:19.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.043 "adrfam": "ipv4", 00:25:19.043 "trsvcid": "$NVMF_PORT", 00:25:19.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.043 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 
"name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:19.044 { 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme$subsystem", 00:25:19.044 "trtype": "$TEST_TRANSPORT", 00:25:19.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "$NVMF_PORT", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.044 "hdgst": ${hdgst:-false}, 00:25:19.044 "ddgst": ${ddgst:-false} 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 } 00:25:19.044 EOF 00:25:19.044 )") 00:25:19.044 14:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:25:19.044 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme1", 00:25:19.044 "trtype": "tcp", 00:25:19.044 "traddr": "10.0.0.2", 00:25:19.044 "adrfam": "ipv4", 00:25:19.044 "trsvcid": "4420", 00:25:19.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.044 "hdgst": false, 00:25:19.044 "ddgst": false 00:25:19.044 }, 00:25:19.044 "method": "bdev_nvme_attach_controller" 00:25:19.044 },{ 00:25:19.044 "params": { 00:25:19.044 "name": "Nvme2", 00:25:19.044 "trtype": "tcp", 00:25:19.044 "traddr": "10.0.0.2", 00:25:19.044 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme3", 00:25:19.045 "trtype": "tcp", 00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme4", 00:25:19.045 "trtype": "tcp", 00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme5", 00:25:19.045 "trtype": "tcp", 00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme6", 00:25:19.045 "trtype": "tcp", 00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme7", 00:25:19.045 "trtype": "tcp", 00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme8", 00:25:19.045 "trtype": "tcp", 
00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme9", 00:25:19.045 "trtype": "tcp", 00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 },{ 00:25:19.045 "params": { 00:25:19.045 "name": "Nvme10", 00:25:19.045 "trtype": "tcp", 00:25:19.045 "traddr": "10.0.0.2", 00:25:19.045 "adrfam": "ipv4", 00:25:19.045 "trsvcid": "4420", 00:25:19.045 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:19.045 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:19.045 "hdgst": false, 00:25:19.045 "ddgst": false 00:25:19.045 }, 00:25:19.045 "method": "bdev_nvme_attach_controller" 00:25:19.045 }' 00:25:19.045 [2024-07-26 14:19:35.914401] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:25:19.045 [2024-07-26 14:19:35.914525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577475 ] 00:25:19.304 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.304 [2024-07-26 14:19:35.999701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.304 [2024-07-26 14:19:36.124539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.203 Running I/O for 10 seconds... 
00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:21.461 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.719 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:25:21.719 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:21.719 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:21.977 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:21.977 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:21.977 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:21.977 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:21.978 14:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2577475
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2577475 ']'
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2577475
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2577475
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2577475'
00:25:21.978 killing process with pid 2577475
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2577475
00:25:21.978 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2577475
00:25:21.978 Received shutdown signal, test time was about 1.048324 seconds
00:25:21.978
00:25:21.978 Latency(us)
00:25:21.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.978 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme1n1 : 1.03 186.06 11.63 0.00 0.00 339932.35 23787.14 307582.29
00:25:21.978 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme2n1 : 1.04 245.87 15.37 0.00 0.00 252147.48 20583.16 288940.94
00:25:21.978 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme3n1 : 1.04 246.68 15.42 0.00 0.00 246093.75 32234.00 271853.04
00:25:21.978 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme4n1 : 1.00 191.07 11.94 0.00 0.00 310900.56 47380.10 306028.85
00:25:21.978 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme5n1 : 1.05 244.41 15.28 0.00 0.00 238745.98 21262.79 285834.05
00:25:21.978 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme6n1 : 1.01 190.53 11.91 0.00 0.00 298487.66 22719.15 287387.50
00:25:21.978 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme7n1 : 1.00 192.77 12.05 0.00 0.00 287597.10 20486.07 265639.25
00:25:21.978 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme8n1 : 1.01 204.15 12.76 0.00 0.00 260089.08 18932.62 276513.37
00:25:21.978 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme9n1 : 1.02 187.68 11.73 0.00 0.00 283679.86 22039.51 290494.39
00:25:21.978 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:21.978 Verification LBA range: start 0x0 length 0x400
00:25:21.978 Nvme10n1 : 1.03 186.62 11.66 0.00 0.00 279183.61 22816.24 320009.86
00:25:21.978 ===================================================================================================================
00:25:21.978 Total : 2075.83 129.74 0.00 0.00 276484.03 18932.62 320009.86
00:25:22.236 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2577300
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r
nvme-fabrics 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2577300 ']' 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2577300 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2577300 ']' 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2577300 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2577300 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2577300' 00:25:23.612 killing process with pid 2577300 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2577300 00:25:23.612 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2577300 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.179 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.085 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:26.085 00:25:26.085 real 0m8.349s 00:25:26.085 user 0m26.007s 00:25:26.085 sys 0m1.716s 00:25:26.085 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:26.085 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:26.085 
************************************ 00:25:26.085 END TEST nvmf_shutdown_tc2 00:25:26.085 ************************************ 00:25:26.085 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:26.085 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:26.085 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:26.085 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:26.346 ************************************ 00:25:26.346 START TEST nvmf_shutdown_tc3 00:25:26.346 ************************************ 00:25:26.346 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:25:26.346 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.346 14:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.346 14:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:26.346 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:26.346 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.346 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:26.347 Found net devices under 0000:84:00.0: cvl_0_0 00:25:26.347 14:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:26.347 Found net devices under 0000:84:00.1: cvl_0_1 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip 
-4 addr flush cvl_0_1 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:25:26.347 00:25:26.347 --- 10.0.0.2 ping statistics --- 00:25:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.347 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:25:26.347 00:25:26.347 --- 10.0.0.1 ping statistics --- 00:25:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.347 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2578397 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2578397 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2578397 ']' 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
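The nvmf_tcp_init sequence traced above is what turns the two e810 ports into a self-contained initiator/target pair: the target-side port is moved into a private network namespace so that 10.0.0.1 (initiator, cvl_0_1) and 10.0.0.2 (target, cvl_0_0) exchange real packets over the physical link. Condensed from the commands the trace runs at nvmf/common.sh@244-@268; interface names and addresses are taken from this run, and the commands need root:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                 # @244-@245: start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"                       # @248: private namespace for the target side
ip link set cvl_0_0 netns "$NS"          # @251: target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # @254: initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # @255: target address
ip link set cvl_0_1 up                   # @258
ip netns exec "$NS" ip link set cvl_0_0 up                # @260
ip netns exec "$NS" ip link set lo up                     # @261
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # @264: admit NVMe/TCP
ping -c 1 10.0.0.2                       # @267: initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1   # @268: target -> initiator sanity check

This is also why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk in the trace: the target process must live in the namespace that owns cvl_0_0.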
00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.347 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:26.606 [2024-07-26 14:19:43.273209] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:25:26.606 [2024-07-26 14:19:43.273308] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.606 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.606 [2024-07-26 14:19:43.357497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.881 [2024-07-26 14:19:43.499531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.881 [2024-07-26 14:19:43.499593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.881 [2024-07-26 14:19:43.499611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.881 [2024-07-26 14:19:43.499627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.881 [2024-07-26 14:19:43.499641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.881 [2024-07-26 14:19:43.499751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.881 [2024-07-26 14:19:43.499786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.881 [2024-07-26 14:19:43.499870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:26.881 [2024-07-26 14:19:43.499873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:26.881 [2024-07-26 14:19:43.690768] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:26.881 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.882 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.155 Malloc1 00:25:27.155 [2024-07-26 14:19:43.794163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.155 Malloc2 00:25:27.155 Malloc3 00:25:27.155 Malloc4 00:25:27.155 Malloc5 00:25:27.155 Malloc6 00:25:27.414 Malloc7 00:25:27.414 Malloc8 00:25:27.414 Malloc9 00:25:27.414 Malloc10 00:25:27.414 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.414 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:27.414 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:27.414 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2578581 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2578581 /var/tmp/bdevperf.sock 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2578581 ']' 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
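The Malloc1 through Malloc10 lines above are the visible output of the create_subsystems loop (shutdown.sh@26-@35): each iteration cats one block of RPCs into rpcs.txt, and a single RPC session then replays the whole file. The trace elides the heredoc body, so the per-subsystem RPC lines below are a plausible reconstruction from the observed effects (ten malloc bdevs, ten cnode subsystems, a TCP listener on 10.0.0.2:4420), not a verbatim copy of shutdown.sh; the direct rpc.py call stands in for the script's rpc_cmd wrapper:

rm -rf rpcs.txt                          # shutdown.sh@26
for i in {1..10}; do                     # @27-@28: one RPC block per subsystem
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc.py < rpcs.txt                        # @35: rpc.py executes a script from stdin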
00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.673 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.673 { 00:25:27.673 "params": { 00:25:27.673 "name": "Nvme$subsystem", 00:25:27.673 "trtype": "$TEST_TRANSPORT", 00:25:27.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.673 "adrfam": "ipv4", 00:25:27.673 "trsvcid": "$NVMF_PORT", 00:25:27.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.674 "hdgst": ${hdgst:-false}, 00:25:27.674 "ddgst": ${ddgst:-false} 00:25:27.674 }, 00:25:27.674 "method": "bdev_nvme_attach_controller" 00:25:27.674 } 00:25:27.674 EOF 00:25:27.674 )") 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.674 { 00:25:27.674 "params": { 00:25:27.674 "name": "Nvme$subsystem", 00:25:27.674 "trtype": "$TEST_TRANSPORT", 00:25:27.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.674 "adrfam": "ipv4", 00:25:27.674 "trsvcid": "$NVMF_PORT", 00:25:27.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.674 "hdgst": ${hdgst:-false}, 00:25:27.674 "ddgst": ${ddgst:-false} 00:25:27.674 }, 00:25:27.674 "method": "bdev_nvme_attach_controller" 00:25:27.674 } 00:25:27.674 EOF 00:25:27.674 )") 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.674 { 00:25:27.674 "params": { 00:25:27.674 "name": "Nvme$subsystem", 00:25:27.674 "trtype": "$TEST_TRANSPORT", 00:25:27.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.674 "adrfam": "ipv4", 00:25:27.674 "trsvcid": "$NVMF_PORT", 00:25:27.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.674 "hdgst": ${hdgst:-false}, 00:25:27.674 "ddgst": ${ddgst:-false} 00:25:27.674 }, 00:25:27.674 "method": "bdev_nvme_attach_controller" 00:25:27.674 } 00:25:27.674 EOF 00:25:27.674 )") 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.674 { 00:25:27.674 "params": { 00:25:27.674 "name": "Nvme$subsystem", 00:25:27.674 
"trtype": "$TEST_TRANSPORT", 00:25:27.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.674 "adrfam": "ipv4", 00:25:27.674 "trsvcid": "$NVMF_PORT", 00:25:27.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.674 "hdgst": ${hdgst:-false}, 00:25:27.674 "ddgst": ${ddgst:-false} 00:25:27.674 }, 00:25:27.674 "method": "bdev_nvme_attach_controller" 00:25:27.674 } 00:25:27.674 EOF 00:25:27.674 )") 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.674 { 00:25:27.674 "params": { 00:25:27.674 "name": "Nvme$subsystem", 00:25:27.674 "trtype": "$TEST_TRANSPORT", 00:25:27.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.674 "adrfam": "ipv4", 00:25:27.674 "trsvcid": "$NVMF_PORT", 00:25:27.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.674 "hdgst": ${hdgst:-false}, 00:25:27.674 "ddgst": ${ddgst:-false} 00:25:27.674 }, 00:25:27.674 "method": "bdev_nvme_attach_controller" 00:25:27.674 } 00:25:27.674 EOF 00:25:27.674 )") 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.674 { 00:25:27.674 "params": { 00:25:27.674 "name": "Nvme$subsystem", 00:25:27.674 "trtype": "$TEST_TRANSPORT", 00:25:27.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.674 "adrfam": "ipv4", 00:25:27.674 "trsvcid": "$NVMF_PORT", 00:25:27.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.674 "hdgst": ${hdgst:-false}, 00:25:27.674 "ddgst": ${ddgst:-false} 00:25:27.674 }, 00:25:27.674 "method": "bdev_nvme_attach_controller" 00:25:27.674 } 00:25:27.674 EOF 00:25:27.674 )") 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.674 { 00:25:27.674 "params": { 00:25:27.674 "name": "Nvme$subsystem", 00:25:27.674 "trtype": "$TEST_TRANSPORT", 00:25:27.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.674 "adrfam": "ipv4", 00:25:27.674 "trsvcid": "$NVMF_PORT", 00:25:27.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.674 "hdgst": ${hdgst:-false}, 00:25:27.674 "ddgst": ${ddgst:-false} 00:25:27.674 }, 00:25:27.674 "method": "bdev_nvme_attach_controller" 00:25:27.674 } 00:25:27.674 EOF 00:25:27.674 )") 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.674 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.675 14:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.675 { 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme$subsystem", 00:25:27.675 "trtype": "$TEST_TRANSPORT", 00:25:27.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "$NVMF_PORT", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.675 "hdgst": ${hdgst:-false}, 00:25:27.675 "ddgst": ${ddgst:-false} 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 } 00:25:27.675 EOF 00:25:27.675 )") 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.675 { 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme$subsystem", 00:25:27.675 "trtype": "$TEST_TRANSPORT", 00:25:27.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "$NVMF_PORT", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.675 "hdgst": ${hdgst:-false}, 00:25:27.675 "ddgst": ${ddgst:-false} 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 } 00:25:27.675 EOF 00:25:27.675 )") 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.675 { 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme$subsystem", 00:25:27.675 "trtype": "$TEST_TRANSPORT", 00:25:27.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "$NVMF_PORT", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.675 "hdgst": ${hdgst:-false}, 00:25:27.675 "ddgst": ${ddgst:-false} 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 } 00:25:27.675 EOF 00:25:27.675 )") 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
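Each config+=(...) heredoc above renders one bdev_nvme_attach_controller fragment, and the jq/IFS/printf steps that follow join the fragments into the document bdevperf reads from --json /dev/fd/63. A self-contained reduction of that pattern, with two caveats: the trace only shows the joined fragment list, so the enclosing "subsystems"/"bdev" wrapper is an assumption about the final document shape, and the loop bound of 2 stands in for the test's 1..10:

config=()
for subsystem in 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,                                     # "${config[*]}" now joins elements with commas
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .

bdevperf consumes the result through process substitution, e.g. bdevperf --json <(...) -q 64 -o 65536 -w verify -t 10, which is why the trace's command line shows --json /dev/fd/63 rather than a file path.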
00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:25:27.675 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme1", 00:25:27.675 "trtype": "tcp", 00:25:27.675 "traddr": "10.0.0.2", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "4420", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:27.675 "hdgst": false, 00:25:27.675 "ddgst": false 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 },{ 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme2", 00:25:27.675 "trtype": "tcp", 00:25:27.675 "traddr": "10.0.0.2", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "4420", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:27.675 "hdgst": false, 00:25:27.675 "ddgst": false 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 },{ 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme3", 00:25:27.675 "trtype": "tcp", 00:25:27.675 "traddr": "10.0.0.2", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "4420", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:27.675 "hdgst": false, 00:25:27.675 "ddgst": false 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 },{ 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme4", 00:25:27.675 "trtype": "tcp", 00:25:27.675 "traddr": "10.0.0.2", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "4420", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:27.675 "hdgst": false, 00:25:27.675 "ddgst": false 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 },{ 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme5", 00:25:27.675 "trtype": "tcp", 00:25:27.675 "traddr": "10.0.0.2", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "4420", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:27.675 "hdgst": false, 00:25:27.675 "ddgst": false 00:25:27.675 }, 00:25:27.675 "method": "bdev_nvme_attach_controller" 00:25:27.675 },{ 00:25:27.675 "params": { 00:25:27.675 "name": "Nvme6", 00:25:27.675 "trtype": "tcp", 00:25:27.675 "traddr": "10.0.0.2", 00:25:27.675 "adrfam": "ipv4", 00:25:27.675 "trsvcid": "4420", 00:25:27.675 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:27.675 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:27.675 "hdgst": false, 00:25:27.675 "ddgst": false 00:25:27.675 }, 00:25:27.676 "method": "bdev_nvme_attach_controller" 00:25:27.676 },{ 00:25:27.676 "params": { 00:25:27.676 "name": "Nvme7", 00:25:27.676 "trtype": "tcp", 00:25:27.676 "traddr": "10.0.0.2", 00:25:27.676 "adrfam": "ipv4", 00:25:27.676 "trsvcid": "4420", 00:25:27.676 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:27.676 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:27.676 "hdgst": false, 00:25:27.676 "ddgst": false 00:25:27.676 }, 00:25:27.676 "method": "bdev_nvme_attach_controller" 00:25:27.676 },{ 00:25:27.676 "params": { 00:25:27.676 "name": "Nvme8", 00:25:27.676 "trtype": "tcp", 00:25:27.676 "traddr": "10.0.0.2", 00:25:27.676 "adrfam": "ipv4", 00:25:27.676 "trsvcid": "4420", 00:25:27.676 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:27.676 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:25:27.676 "hdgst": false, 00:25:27.676 "ddgst": false 00:25:27.676 }, 00:25:27.676 "method": "bdev_nvme_attach_controller" 00:25:27.676 },{ 00:25:27.676 "params": { 00:25:27.676 "name": "Nvme9", 00:25:27.676 "trtype": "tcp", 00:25:27.676 "traddr": "10.0.0.2", 00:25:27.676 "adrfam": "ipv4", 00:25:27.676 "trsvcid": "4420", 00:25:27.676 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:27.676 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:27.676 "hdgst": false, 00:25:27.676 "ddgst": false 00:25:27.676 }, 00:25:27.676 "method": "bdev_nvme_attach_controller" 00:25:27.676 },{ 00:25:27.676 "params": { 00:25:27.676 "name": "Nvme10", 00:25:27.676 "trtype": "tcp", 00:25:27.676 "traddr": "10.0.0.2", 00:25:27.676 "adrfam": "ipv4", 00:25:27.676 "trsvcid": "4420", 00:25:27.676 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:27.676 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:27.676 "hdgst": false, 00:25:27.676 "ddgst": false 00:25:27.676 }, 00:25:27.676 "method": "bdev_nvme_attach_controller" 00:25:27.676 }' 00:25:27.676 [2024-07-26 14:19:44.364152] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:25:27.676 [2024-07-26 14:19:44.364250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578581 ] 00:25:27.676 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.676 [2024-07-26 14:19:44.440860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.934 [2024-07-26 14:19:44.566900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.833 Running I/O for 10 seconds... 00:25:29.833 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.833 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:29.833 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:29.833 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.833 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:30.092 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=71 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 71 -ge 100 ']' 00:25:30.350 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:30.607 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:30.607 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:30.607 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:30.607 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:30.607 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.607 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.881 14:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']'
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
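The countdown traced above (target/shutdown.sh@50 through @69) is the suite's waitforio helper: it polls bdev_get_iostat over the bdevperf RPC socket until the bdev has served at least 100 reads, giving up after ten attempts at 0.25 s intervals. Reconstructed from the xtrace output as a sketch, not the verbatim source; rpc_cmd is the suite's RPC wrapper, and the threshold, retry count, and sleep are the ones visible in the trace:

# waitforio <rpc-socket> <bdev-name>, as exercised above with
# /var/tmp/bdevperf.sock and Nvme1n1; returns 0 once read I/O is observed.
waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1    # sh@50: socket argument is required
    [ -z "$bdev" ] && return 1    # sh@54: bdev argument is required
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do                        # sh@59
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')              # sh@60
        if [ "$read_io_count" -ge 100 ]; then              # sh@63
            ret=0                                          # sh@64
            break                                          # sh@65
        fi
        sleep 0.25                                         # sh@67
    done
    return $ret                                            # sh@69
}

In this run the counter grows 3, 71, 135 across iterations, so the third poll clears the 100-read threshold and the helper returns success.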
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2578397
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2578397 ']'
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2578397
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578397
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578397'
00:25:30.881 killing process with pid 2578397
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2578397
00:25:30.881 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2578397
00:25:30.881 [2024-07-26 14:19:47.576604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfd700 is same with the state(5) to be set
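The killprocess call traced above follows directly from the common/autotest_common.sh xtrace lines: validate the pid, confirm the process still exists, resolve its name (a sudo wrapper would take a different branch, not exercised here since process_name=reactor_1), then kill and reap it. A sketch under those assumptions; the exact failure handling of the real helper may differ:

# killprocess <pid>: terminate a test daemon and wait for it to exit.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                   # @950: pid argument is required
    kill -0 "$pid" || return 1                  # @954: process must still exist
                                                #        (failure handling here is an assumption)
    local process_name=
    if [ "$(uname)" = Linux ]; then             # @955
        process_name=$(ps --no-headers -o comm= "$pid")    # @956
    fi
    # @960: if the process were a sudo wrapper, its child would need to be
    # killed instead; that branch is not taken in this run.
    echo "killing process with pid $pid"        # @968
    kill "$pid"                                 # @969
    wait "$pid"                                 # @974: reap the child and collect its status
}

Once the nvmf target (pid 2578397) receives the signal, its TCP transport begins tearing down queue pairs, which produces the recv-state errors that follow.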
00:25:30.882 [2024-07-26 14:19:47.580714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfdbc0 is same with the state(5) to be set
00:25:30.883 [2024-07-26 14:19:47.583438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe080 is same with the state(5) to be set
00:25:30.883 [2024-07-26 14:19:47.585502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfea20 is same with the state(5) to be set
00:25:30.884 [2024-07-26 14:19:47.587736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c818d0 is same with the state(5) to be set
00:25:30.885 [2024-07-26 14:19:47.590165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfed90 is same with the state(5) to be set
00:25:30.885 [2024-07-26 14:19:47.592297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff270 is same with the state(5) to be set
*ERROR*: The recv state of tqpair=0x1bff270 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.593174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff270 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.593188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff270 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.593202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff270 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.593216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff270 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.593229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff270 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.886 [2024-07-26 14:19:47.594204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 
14:19:47.594326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same 
with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.594943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bff5e0 is same with the state(5) to be set 00:25:30.887 [2024-07-26 14:19:47.596957] nvme_qpair.c: 
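The flood above comes from a guard in SPDK's lib/nvmf/tcp.c: the target refuses to "re-set" a receive state that is already current and logs instead, and during connection teardown every poll re-requests the same terminal state, so the line repeats once per poll until the qpair is destroyed. Below is a minimal, self-contained model of that guard, not the upstream code; the enum ordering (making state 5 the error/terminal state) and all names here are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed ordering of the TCP PDU recv-state enum for builds of this
 * vintage; if it differs upstream, only the numeric label changes. */
enum recv_state {
    AWAIT_PDU_READY,   /* 0 */
    AWAIT_PDU_CH,      /* 1 */
    AWAIT_PDU_PSH,     /* 2 */
    AWAIT_PDU_PAYLOAD, /* 3 */
    QUIESCING,         /* 4 */
    RECV_ERROR         /* 5 -- the "state(5)" seen in the log */
};

struct tqpair {
    const void *addr;          /* printed as the tqpair=0x... token */
    enum recv_state recv_state;
};

static void set_recv_state(struct tqpair *q, enum recv_state s)
{
    if (q->recv_state == s) {
        /* Benign but noisy: the requested state is already current. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                q->addr, (int)s);
        return;
    }
    q->recv_state = s;  /* the real code also does per-state bookkeeping */
}

int main(void)
{
    struct tqpair q = { (const void *)(uintptr_t)0x1bfed90, AWAIT_PDU_READY };

    set_recv_state(&q, RECV_ERROR);      /* first call: silent transition */
    for (int i = 0; i < 3; i++) {
        set_recv_state(&q, RECV_ERROR);  /* each subsequent call logs once */
    }
    return 0;
}
```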
00:25:30.887 [2024-07-26 14:19:47.596957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.887 [2024-07-26 14:19:47.597018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.888 [... the same print_command/print_completion pair repeats for WRITE cid:9 through cid:63 (lba 17536 through 24448, len:128 each) and for READ cid:0 through cid:7 (lba 16384 through 17280, len:128 each), every one completed ABORTED - SQ DELETION (00/08), through 2024-07-26 14:19:47.599279 ...]
00:25:30.889 [2024-07-26 14:19:47.599340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:30.889 [2024-07-26 14:19:47.599943] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x289f1b0 was disconnected and freed. reset controller.
00:25:30.889 [2024-07-26 14:19:47.600076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:30.889 [2024-07-26 14:19:47.600102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.889 [... the same four ASYNC EVENT REQUEST (0c) aborts (qid:0 cid:0 through cid:3) repeat per admin qpair, each group followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=<addr> is same with the state(5) to be set, for tqpair addresses 0x28d3120, 0x2a72240, 0x2a6c220, 0x2a61390, 0x28d6420, 0x2a72990, 0x2494200, 0x28d2ba0, 0x23a8610 and 0x2a405a0, through 2024-07-26 14:19:47.601901 ...]
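For reference, the "(00/08)" printed with every aborted command above is the NVMe Status Code Type / Status Code pair from the completion entry: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion" per the NVMe base specification, which is exactly what the controller reset triggered by the transport error produces for commands still queued on the deleted submission queues. A small self-contained decoder is sketched below; the bit packing follows the NVMe completion status field (bit 0 phase tag, bits 8:1 SC, bits 11:9 SCT), and the constant names are illustrative rather than taken from any header.

```c
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC            0x0
#define NVME_SC_ABORTED_SQ_DELETION 0x08

/* Decode a 16-bit NVMe completion status word:
 * bit 0 = phase tag, bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT); DNR/M live in the top bits. */
static void decode_status(uint16_t status)
{
    uint8_t sc  = (status >> 1) & 0xff;
    uint8_t sct = (status >> 9) & 0x7;

    printf("(%02x/%02x)%s\n", sct, sc,
           (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION)
               ? " = ABORTED - SQ DELETION" : "");
}

int main(void)
{
    /* SCT=0, SC=0x08 -> prints "(00/08) = ABORTED - SQ DELETION",
     * the status attached to every queued WRITE/READ and ASYNC EVENT
     * REQUEST above once their submission queues were deleted. */
    decode_status((NVME_SCT_GENERIC << 9) | (NVME_SC_ABORTED_SQ_DELETION << 1));
    return 0;
}
```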
00:25:30.890 [2024-07-26 14:19:47.602548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.890 [2024-07-26 14:19:47.602576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.890 [... the same pair repeats for WRITE cid:1 through cid:30 (lba 24704 through 28416, len:128 each), every one completed ABORTED - SQ DELETION (00/08), through 2024-07-26 14:19:47.603661 ...]
00:25:30.891 [2024-07-26 14:19:47.603678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.891 [2024-07-26 14:19:47.603705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.603973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.603988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.891 [2024-07-26 14:19:47.604005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.891 [2024-07-26 14:19:47.604021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.604038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 
[2024-07-26 14:19:47.604054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.604071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.604086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 
14:19:47.617567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 14:19:47.617894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.892 [2024-07-26 14:19:47.617912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.892 [2024-07-26 
14:19:47.617928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.893 [2024-07-26 14:19:47.617946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.893 [2024-07-26 14:19:47.617962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.893 [2024-07-26 14:19:47.617980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.893 [2024-07-26 14:19:47.617997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.893 [2024-07-26 14:19:47.618083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:30.893 [2024-07-26 14:19:47.618197] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2a26530 was disconnected and freed. reset controller. 00:25:30.893 [2024-07-26 14:19:47.618306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.893 [2024-07-26 14:19:47.618329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.893 [2024-07-26 14:19:47.618360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.894 [2024-07-26 14:19:47.618378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.894 [2024-07-26 14:19:47.618396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.894 [2024-07-26 14:19:47.618413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.894 [2024-07-26 14:19:47.618456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.894 [2024-07-26 14:19:47.618476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.894 [2024-07-26 14:19:47.618494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.894 [2024-07-26 14:19:47.618510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.894 [2024-07-26 14:19:47.618528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.894 [2024-07-26 14:19:47.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.894 [2024-07-26 14:19:47.618561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.618976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.618992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.619010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.619030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.895 [2024-07-26 14:19:47.619048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.895 [2024-07-26 14:19:47.619065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.619978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.619996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.896 [2024-07-26 14:19:47.620435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.896 [2024-07-26 14:19:47.620456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.897 [2024-07-26 14:19:47.620478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.897 [2024-07-26 14:19:47.620495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.897 [2024-07-26 14:19:47.620511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.897 [2024-07-26 14:19:47.620528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.897 [2024-07-26 14:19:47.620544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.897 [2024-07-26 14:19:47.620562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.897 [2024-07-26 14:19:47.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.897 [2024-07-26 14:19:47.620677] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2a27ab0 was disconnected and freed. reset controller. 
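The same teardown pattern repeats for every qpair above, so a minimal shell sketch like the one below can condense a saved console log for triage; it assumes only that this output was captured to a file (build.log is a hypothetical name) and uses standard grep/awk:
  #!/usr/bin/env bash
  # Summarize SPDK teardown noise: count ABORTED - SQ DELETION completions
  # per queue id and list every qpair that was disconnected and freed.
  LOG=${1:-build.log}   # hypothetical capture of the console output above
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$LOG" |
    awk -F'qid:' '{n[$2]++} END {for (q in n) printf "qid %s: %d aborted\n", q, n[q]}'
  grep -o 'qpair 0x[0-9a-f]* was disconnected and freed' "$LOG" | sort -u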
00:25:30.897 [2024-07-26 14:19:47.621171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.897 [2024-07-26 14:19:47.621206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.898 [... 63 further READ (sqid:1 cid:1-63, lba:16512-24448, len:128) / ABORTED - SQ DELETION (00/08) pairs condensed ...]
00:25:30.898 [2024-07-26 14:19:47.623599] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x289dc70 was disconnected and freed. reset controller.
00:25:30.898 [2024-07-26 14:19:47.624976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d3120 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a72240 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a6c220 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a61390 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d6420 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a72990 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2494200 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d2ba0 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a8610 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.625267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a405a0 (9): Bad file descriptor
00:25:30.898 [2024-07-26 14:19:47.629516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:30.898 [2024-07-26 14:19:47.630399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:30.898 [2024-07-26 14:19:47.630451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting
controller 00:25:30.898 [2024-07-26 14:19:47.630475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:30.898 [2024-07-26 14:19:47.630851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-26 14:19:47.630905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a405a0 with addr=10.0.0.2, port=4420 00:25:30.898 [2024-07-26 14:19:47.630926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a405a0 is same with the state(5) to be set 00:25:30.898 [2024-07-26 14:19:47.631781] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.898 [2024-07-26 14:19:47.631885] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.898 [2024-07-26 14:19:47.631973] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.898 [2024-07-26 14:19:47.632052] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.898 [2024-07-26 14:19:47.632736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-26 14:19:47.632786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28d3120 with addr=10.0.0.2, port=4420 00:25:30.898 [2024-07-26 14:19:47.632805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d3120 is same with the state(5) to be set 00:25:30.898 [2024-07-26 14:19:47.633043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-26 14:19:47.633089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28d6420 with addr=10.0.0.2, port=4420 00:25:30.898 [2024-07-26 14:19:47.633106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d6420 is same with the state(5) to be set 00:25:30.898 [2024-07-26 14:19:47.633324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.898 [2024-07-26 14:19:47.633351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a61390 with addr=10.0.0.2, port=4420 00:25:30.898 [2024-07-26 14:19:47.633368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a61390 is same with the state(5) to be set 00:25:30.898 [2024-07-26 14:19:47.633395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a405a0 (9): Bad file descriptor 00:25:30.898 [2024-07-26 14:19:47.633462] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.898 [2024-07-26 14:19:47.633554] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.898 [2024-07-26 14:19:47.633709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d3120 (9): Bad file descriptor 00:25:30.898 [2024-07-26 14:19:47.633745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d6420 (9): Bad file descriptor 00:25:30.898 [2024-07-26 14:19:47.633766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a61390 (9): Bad file descriptor 00:25:30.898 [2024-07-26 14:19:47.633796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:30.898 [2024-07-26 14:19:47.633811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:30.898 [2024-07-26 14:19:47.633839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:30.898 [2024-07-26 14:19:47.633981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.898 [2024-07-26 14:19:47.634007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:30.898 [2024-07-26 14:19:47.634022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:30.899 [2024-07-26 14:19:47.634037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:30.899 [2024-07-26 14:19:47.634058] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:30.899 [2024-07-26 14:19:47.634077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:30.899 [2024-07-26 14:19:47.634091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:30.899 [2024-07-26 14:19:47.634110] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:30.899 [2024-07-26 14:19:47.634125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:30.899 [2024-07-26 14:19:47.634139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:30.899 [2024-07-26 14:19:47.634197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.899 [2024-07-26 14:19:47.634216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.899 [2024-07-26 14:19:47.634229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.899 [2024-07-26 14:19:47.635094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 
14:19:47.635489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635834] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.635973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.635988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.636968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.636988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.637003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.637022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.637038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.637057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.899 [2024-07-26 14:19:47.637073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.899 [2024-07-26 14:19:47.637091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.637351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.637368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a23e20 is same with the state(5) to be set 00:25:30.900 [2024-07-26 14:19:47.638802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.638829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.638853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.638870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.638888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.638905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.638923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.638939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.638970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.638988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639028] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.639967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.639985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.900 [2024-07-26 14:19:47.640398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.900 [2024-07-26 14:19:47.640414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:30.900 [2024-07-26 14:19:47.640438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.900 [2024-07-26 14:19:47.640456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:47-63, lba:22400-24448, len:128 (14:19:47.640474 - 14:19:47.656101) ...]
00:25:30.901 [2024-07-26 14:19:47.656119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a25040 is same with the state(5) to be set
00:25:30.901 [2024-07-26 14:19:47.657667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.901 [2024-07-26 14:19:47.657693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-63, lba:16512-24448, len:128 (14:19:47.657722 - 14:19:47.659904) ...]
00:25:30.902 [2024-07-26 14:19:47.659921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a28fa0 is same with the state(5) to be set
00:25:30.902 [2024-07-26 14:19:47.661312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.902 [2024-07-26 14:19:47.661338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-63, lba:16512-24448, len:128 (14:19:47.661362 - 14:19:47.663594) ...]
00:25:30.903 [2024-07-26 14:19:47.663615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a2a4d0 is same with the state(5) to be set
00:25:30.903 [2024-07-26 14:19:47.665020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.903 [2024-07-26 14:19:47.665045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-62, lba:16512-24320, len:128 (14:19:47.665069 - 14:19:47.682183) ...]
00:25:30.904 [2024-07-26 14:19:47.682201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.682217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.682236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x289b2f0 is same with the state(5) to be set 00:25:30.904 [2024-07-26 14:19:47.683791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.683819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.683850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.683868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.683886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.683903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.683921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.683938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.683956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.683972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.683990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.684006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.684024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.684041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.684058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.904 [2024-07-26 14:19:47.684074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.904 [2024-07-26 14:19:47.684092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684108] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.684971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.684987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:30.905 [2024-07-26 14:19:47.685536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 
14:19:47.685872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.685980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.685996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.686014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.905 [2024-07-26 14:19:47.686030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.905 [2024-07-26 14:19:47.686046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x289c830 is same with the state(5) to be set 00:25:30.905 [2024-07-26 14:19:47.688722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.905 [2024-07-26 14:19:47.688764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:30.905 [2024-07-26 14:19:47.688786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:30.905 [2024-07-26 14:19:47.688805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:30.905 [2024-07-26 14:19:47.688959] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:30.905 [2024-07-26 14:19:47.688993] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:30.905 [2024-07-26 14:19:47.689101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:30.905 task offset: 17408 on job bdev=Nvme10n1 fails
00:25:30.905
00:25:30.905 Latency(us)
00:25:30.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:30.905 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.905 Job: Nvme1n1 ended in about 0.98 seconds with error
00:25:30.905 Verification LBA range: start 0x0 length 0x400
00:25:30.905 Nvme1n1 : 0.98 135.27 8.45 65.58 0.00 314979.49 21845.33 306028.85
00:25:30.906 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme2n1 ended in about 0.99 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme2n1 : 0.99 128.70 8.04 64.35 0.00 321141.32 23204.60 287387.50
00:25:30.906 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme3n1 ended in about 0.96 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme3n1 : 0.96 199.19 12.45 66.40 0.00 228050.68 19418.07 284280.60
00:25:30.906 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme4n1 ended in about 0.97 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme4n1 : 0.97 198.94 12.43 66.31 0.00 223277.89 23787.14 279620.27
00:25:30.906 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme5n1 ended in about 1.00 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme5n1 : 1.00 128.22 8.01 64.11 0.00 302129.30 22524.97 302921.96
00:25:30.906 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme6n1 ended in about 1.00 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme6n1 : 1.00 127.75 7.98 63.87 0.00 296878.08 23107.51 304475.40
00:25:30.906 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme7n1 ended in about 1.02 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme7n1 : 1.02 125.41 7.84 62.71 0.00 296463.42 37671.06 278066.82
00:25:30.906 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme8n1 ended in about 1.02 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme8n1 : 1.02 124.95 7.81 62.48 0.00 291197.53 19903.53 274959.93
00:25:30.906 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme9n1 ended in about 0.97 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme9n1 : 0.97 132.41 8.28 66.21 0.00 265413.97 21651.15 293601.28
00:25:30.906 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:30.906 Job: Nvme10n1 ended in about 0.96 seconds with error
00:25:30.906 Verification LBA range: start 0x0 length 0x400
00:25:30.906 Nvme10n1 : 0.96 133.04 8.32 66.52 0.00 257558.76 24758.04 324670.20
00:25:30.906 ===================================================================================================================
00:25:30.906 Total : 1433.88 89.62 648.54 0.00 276406.58 19418.07 324670.20
00:25:30.906 [2024-07-26 14:19:47.721972] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:30.906 [2024-07-26 14:19:47.722073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:30.906 [2024-07-26 14:19:47.722601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.722664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2494200 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.722688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2494200 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.722978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.723025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a72990 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.723044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a72990 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.723317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.723364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28d2ba0 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.723382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d2ba0 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.723599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.723629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a72240 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.723648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a72240 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.725594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:30.906 [2024-07-26 14:19:47.725627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:30.906 [2024-07-26 14:19:47.725648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:30.906 [2024-07-26 14:19:47.725667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:30.906 [2024-07-26 14:19:47.726016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.726080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a8610 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.726100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a8610 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.726360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.726408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a6c220 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.726434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a6c220 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.726464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2494200 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.726491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a72990 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.726512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d2ba0 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.726531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a72240 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.726601] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:30.906 [2024-07-26 14:19:47.726627] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:30.906 [2024-07-26 14:19:47.726652] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:30.906 [2024-07-26 14:19:47.726676] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:30.906 [2024-07-26 14:19:47.727037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.727088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a405a0 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.727107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a405a0 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.727375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.727422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a61390 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.727458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a61390 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.727667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.727696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28d6420 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.727714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d6420 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.727875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.906 [2024-07-26 14:19:47.727923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28d3120 with addr=10.0.0.2, port=4420
00:25:30.906 [2024-07-26 14:19:47.727942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d3120 is same with the state(5) to be set
00:25:30.906 [2024-07-26 14:19:47.727963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a8610 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.727984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a6c220 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.728003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a405a0 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.728402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a61390 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.728422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d6420 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.728455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d3120 (9): Bad file descriptor
00:25:30.906 [2024-07-26 14:19:47.728475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:30.906 [2024-07-26 14:19:47.728787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:30.906 [2024-07-26 14:19:47.728801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:25:30.906 [2024-07-26 14:19:47.728849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.906 [2024-07-26 14:19:47.728896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:25:31.483 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2578581
00:25:32.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2578581) - No such process
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:32.420 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:32.420 rmmod nvme_tcp
00:25:32.679 rmmod nvme_fabrics
00:25:32.679 rmmod nvme_keyring
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:32.679 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:32.680 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:32.680 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:32.680 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:32.680 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:34.582
00:25:34.582 real 0m8.410s
00:25:34.582 user 0m22.149s
00:25:34.582 sys 0m1.665s
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:25:34.582 ************************************
00:25:34.582 END TEST nvmf_shutdown_tc3
00:25:34.582 ************************************
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:25:34.582
00:25:34.582 real 0m30.034s
00:25:34.582 user 1m24.069s
00:25:34.582 sys 0m7.488s
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:34.582 ************************************
00:25:34.582 END TEST nvmf_shutdown
00:25:34.582 ************************************
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:25:34.582
00:25:34.582 real 12m10.679s
00:25:34.582 user 29m16.506s
00:25:34.582 sys 2m56.825s
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:34.582 14:19:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:34.583 ************************************
00:25:34.583 END TEST nvmf_target_extra
00:25:34.583 ************************************
00:25:34.841 14:19:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:34.841 14:19:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:34.841 14:19:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:34.841 14:19:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:34.841 ************************************
00:25:34.841 START TEST nvmf_host
00:25:34.841 ************************************
00:25:34.841 14:19:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:34.841 * Looking for test storage...
00:25:34.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.842 ************************************
00:25:34.842 START TEST nvmf_multicontroller
00:25:34.842 ************************************
00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:25:34.842 * Looking for test storage...
00:25:34.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.842 14:19:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:25:34.842 14:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:25:38.127 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.127 14:19:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:38.128 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:38.128 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:38.128 Found net devices under 0000:84:00.0: cvl_0_0 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:38.128 Found net devices under 0000:84:00.1: cvl_0_1 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:38.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:25:38.128 00:25:38.128 --- 10.0.0.2 ping statistics --- 00:25:38.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.128 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:25:38.128 00:25:38.128 --- 10.0.0.1 ping statistics --- 00:25:38.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.128 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2581267 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2581267 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2581267 ']' 00:25:38.128 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.129 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.129 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.129 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.129 14:19:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.129 [2024-07-26 14:19:54.641937] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
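The nvmf_tcp_init sequence traced above splits the two E810 ports between the root namespace (initiator side, cvl_0_1, 10.0.0.1) and a dedicated target namespace (cvl_0_0, 10.0.0.2), so NVMe/TCP traffic crosses the physical link rather than loopback. A minimal standalone sketch of that topology, assuming the same interface names, addresses, and namespace name captured in this run:

#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init above
# (not part of the captured log; interface names assume this testbed).
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"                          # target-side namespace
ip link set cvl_0_0 netns "$NS"             # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

Teardown at the end of each test is the inverse, as seen in the trace: _remove_spdk_ns deletes the namespace and ip -4 addr flush cvl_0_1 clears the initiator-side address.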
00:25:38.129 [2024-07-26 14:19:54.642044] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.129 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.129 [2024-07-26 14:19:54.735220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:38.129 [2024-07-26 14:19:54.878509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.129 [2024-07-26 14:19:54.878587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.129 [2024-07-26 14:19:54.878607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.129 [2024-07-26 14:19:54.878624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.129 [2024-07-26 14:19:54.878639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.129 [2024-07-26 14:19:54.878739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.129 [2024-07-26 14:19:54.878803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.129 [2024-07-26 14:19:54.878806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.387 [2024-07-26 14:19:55.047336] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.387 Malloc0 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.387 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 
14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 [2024-07-26 14:19:55.110823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 [2024-07-26 14:19:55.118637] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 Malloc1 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2581294 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2581294 /var/tmp/bdevperf.sock 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2581294 ']' 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:38.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
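By this point the target inside cvl_0_0_ns_spdk has been provisioned through the RPCs traced above: one TCP transport (-o -u 8192), two 64 MiB / 512 B-block malloc bdevs, and subsystems cnode1/cnode2 each exposing a namespace and listening on 10.0.0.2 ports 4420 and 4421. Condensed as plain rpc.py calls (a sketch only: rpc_cmd in the trace is a test wrapper, the scripts/rpc.py path is assumed, and the default /var/tmp/spdk.sock Unix socket is reachable from the root namespace even though nvmf_tgt runs inside the netns):

#!/usr/bin/env bash
# Hypothetical replay of the target provisioning traced above.
RPC=scripts/rpc.py          # assumed path inside an SPDK checkout
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
    $RPC bdev_malloc_create 64 512 -b Malloc$((i - 1))
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
done

bdevperf itself is started with -z -r /var/tmp/bdevperf.sock, so the bdev_nvme_attach_controller calls that follow are issued against the bdevperf socket rather than the target's; the JSON-RPC -114 errors below come from bdevperf's bdev_nvme layer rejecting a second controller under the existing name NVMe0.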
00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.388 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.953 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.953 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:38.953 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:38.953 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.953 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.953 NVMe0n1 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.954 1 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.954 request: 00:25:38.954 { 00:25:38.954 "name": "NVMe0", 00:25:38.954 "trtype": "tcp", 00:25:38.954 "traddr": "10.0.0.2", 00:25:38.954 "adrfam": "ipv4", 00:25:38.954 
"trsvcid": "4420", 00:25:38.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.954 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:38.954 "hostaddr": "10.0.0.2", 00:25:38.954 "hostsvcid": "60000", 00:25:38.954 "prchk_reftag": false, 00:25:38.954 "prchk_guard": false, 00:25:38.954 "hdgst": false, 00:25:38.954 "ddgst": false, 00:25:38.954 "method": "bdev_nvme_attach_controller", 00:25:38.954 "req_id": 1 00:25:38.954 } 00:25:38.954 Got JSON-RPC error response 00:25:38.954 response: 00:25:38.954 { 00:25:38.954 "code": -114, 00:25:38.954 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:38.954 } 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.954 request: 00:25:38.954 { 00:25:38.954 "name": "NVMe0", 00:25:38.954 "trtype": "tcp", 00:25:38.954 "traddr": "10.0.0.2", 00:25:38.954 "adrfam": "ipv4", 00:25:38.954 "trsvcid": "4420", 00:25:38.954 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:38.954 "hostaddr": "10.0.0.2", 00:25:38.954 "hostsvcid": "60000", 00:25:38.954 "prchk_reftag": false, 00:25:38.954 "prchk_guard": false, 00:25:38.954 "hdgst": false, 00:25:38.954 "ddgst": false, 00:25:38.954 "method": "bdev_nvme_attach_controller", 00:25:38.954 "req_id": 1 00:25:38.954 } 00:25:38.954 Got JSON-RPC error response 00:25:38.954 response: 00:25:38.954 { 00:25:38.954 "code": -114, 00:25:38.954 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:25:38.954 } 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.954 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.247 request: 00:25:39.247 { 00:25:39.247 "name": "NVMe0", 00:25:39.247 "trtype": "tcp", 00:25:39.247 "traddr": "10.0.0.2", 00:25:39.247 "adrfam": "ipv4", 00:25:39.247 "trsvcid": "4420", 00:25:39.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.247 "hostaddr": "10.0.0.2", 00:25:39.247 "hostsvcid": "60000", 00:25:39.247 "prchk_reftag": false, 00:25:39.247 "prchk_guard": false, 00:25:39.247 "hdgst": false, 00:25:39.247 "ddgst": false, 00:25:39.247 "multipath": "disable", 00:25:39.247 "method": "bdev_nvme_attach_controller", 00:25:39.247 "req_id": 1 00:25:39.247 } 00:25:39.247 Got JSON-RPC error response 00:25:39.247 response: 00:25:39.247 { 00:25:39.248 "code": -114, 00:25:39.248 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:39.248 } 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.248 request: 00:25:39.248 { 00:25:39.248 "name": "NVMe0", 00:25:39.248 "trtype": "tcp", 00:25:39.248 "traddr": "10.0.0.2", 00:25:39.248 "adrfam": "ipv4", 00:25:39.248 "trsvcid": "4420", 00:25:39.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.248 "hostaddr": "10.0.0.2", 00:25:39.248 "hostsvcid": "60000", 00:25:39.248 "prchk_reftag": false, 00:25:39.248 "prchk_guard": false, 00:25:39.248 "hdgst": false, 00:25:39.248 "ddgst": false, 00:25:39.248 "multipath": "failover", 00:25:39.248 "method": "bdev_nvme_attach_controller", 00:25:39.248 "req_id": 1 00:25:39.248 } 00:25:39.248 Got JSON-RPC error response 00:25:39.248 response: 00:25:39.248 { 00:25:39.248 "code": -114, 00:25:39.248 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:39.248 } 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.248 00:25:39.248 14:19:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.248 14:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.248 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:39.248 14:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:40.617 0 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2581294 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2581294 ']' 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2581294 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2581294 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2581294' 00:25:40.617 killing process with pid 2581294 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2581294 00:25:40.617 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2581294 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:25:40.875 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:40.875 [2024-07-26 14:19:55.237892] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:25:40.875 [2024-07-26 14:19:55.237999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581294 ] 00:25:40.875 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.875 [2024-07-26 14:19:55.313476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.875 [2024-07-26 14:19:55.435117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.875 [2024-07-26 14:19:56.045311] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 043dcc16-592c-4b75-89be-b39c9a930f62 already exists 00:25:40.875 [2024-07-26 14:19:56.045358] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:043dcc16-592c-4b75-89be-b39c9a930f62 alias for bdev NVMe1n1 00:25:40.875 [2024-07-26 14:19:56.045375] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:40.875 Running I/O for 1 seconds... 00:25:40.875 00:25:40.875 Latency(us) 00:25:40.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.875 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:40.875 NVMe0n1 : 1.01 17191.70 67.16 0.00 0.00 7433.27 3398.16 12913.02 00:25:40.875 =================================================================================================================== 00:25:40.875 Total : 17191.70 67.16 0.00 0.00 7433.27 3398.16 12913.02 00:25:40.875 Received shutdown signal, test time was about 1.000000 seconds 00:25:40.875 00:25:40.875 Latency(us) 00:25:40.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.875 =================================================================================================================== 00:25:40.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:40.875 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.875 rmmod nvme_tcp 00:25:40.875 rmmod nvme_fabrics 00:25:40.875 rmmod nvme_keyring 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2581267 ']' 00:25:40.875 14:19:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2581267 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2581267 ']' 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2581267 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.875 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2581267 00:25:41.133 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:41.133 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:41.133 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2581267' 00:25:41.133 killing process with pid 2581267 00:25:41.133 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2581267 00:25:41.133 14:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2581267 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.392 14:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:43.925 00:25:43.925 real 0m8.581s 00:25:43.925 user 0m13.216s 00:25:43.925 sys 0m3.075s 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:43.925 ************************************ 00:25:43.925 END TEST nvmf_multicontroller 00:25:43.925 ************************************ 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.925 ************************************ 00:25:43.925 START TEST nvmf_aer 00:25:43.925 ************************************ 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:43.925 * Looking for test storage... 00:25:43.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:25:43.925 14:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:46.475 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:46.475 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:46.475 Found net devices under 0000:84:00.0: cvl_0_0 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.475 14:20:03 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:46.475 Found net devices under 0000:84:00.1: cvl_0_1 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:25:46.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:25:46.475 00:25:46.475 --- 10.0.0.2 ping statistics --- 00:25:46.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.475 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:25:46.475 00:25:46.475 --- 10.0.0.1 ping statistics --- 00:25:46.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.475 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.475 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2583646 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2583646 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2583646 ']' 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:46.476 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.749 [2024-07-26 14:20:03.358055] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:25:46.749 [2024-07-26 14:20:03.358137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.749 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.749 [2024-07-26 14:20:03.432808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.749 [2024-07-26 14:20:03.558557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.750 [2024-07-26 14:20:03.558613] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.750 [2024-07-26 14:20:03.558630] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.750 [2024-07-26 14:20:03.558643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.750 [2024-07-26 14:20:03.558655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.750 [2024-07-26 14:20:03.558719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.750 [2024-07-26 14:20:03.558754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.750 [2024-07-26 14:20:03.558810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.750 [2024-07-26 14:20:03.558813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.008 [2024-07-26 14:20:03.731272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.008 Malloc0 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.008 14:20:03 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.008 [2024-07-26 14:20:03.785759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.008 [ 00:25:47.008 { 00:25:47.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:47.008 "subtype": "Discovery", 00:25:47.008 "listen_addresses": [], 00:25:47.008 "allow_any_host": true, 00:25:47.008 "hosts": [] 00:25:47.008 }, 00:25:47.008 { 00:25:47.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.008 "subtype": "NVMe", 00:25:47.008 "listen_addresses": [ 00:25:47.008 { 00:25:47.008 "trtype": "TCP", 00:25:47.008 "adrfam": "IPv4", 00:25:47.008 "traddr": "10.0.0.2", 00:25:47.008 "trsvcid": "4420" 00:25:47.008 } 00:25:47.008 ], 00:25:47.008 "allow_any_host": true, 00:25:47.008 "hosts": [], 00:25:47.008 "serial_number": "SPDK00000000000001", 00:25:47.008 "model_number": "SPDK bdev Controller", 00:25:47.008 "max_namespaces": 2, 00:25:47.008 "min_cntlid": 1, 00:25:47.008 "max_cntlid": 65519, 00:25:47.008 "namespaces": [ 00:25:47.008 { 00:25:47.008 "nsid": 1, 00:25:47.008 "bdev_name": "Malloc0", 00:25:47.008 "name": "Malloc0", 00:25:47.008 "nguid": "98E99C8B4BFC46B398E8E30EB28D2ADA", 00:25:47.008 "uuid": "98e99c8b-4bfc-46b3-98e8-e30eb28d2ada" 00:25:47.008 } 00:25:47.008 ] 00:25:47.008 } 00:25:47.008 ] 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2583793 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:25:47.008 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:47.266 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:47.266 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:25:47.266 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:25:47.266 14:20:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:47.266 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:25:47.266 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.524 Malloc1 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.524 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.524 [ 00:25:47.524 { 00:25:47.524 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:47.524 "subtype": "Discovery", 00:25:47.524 "listen_addresses": [], 00:25:47.524 "allow_any_host": true, 00:25:47.524 "hosts": [] 00:25:47.524 }, 00:25:47.524 { 00:25:47.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.524 "subtype": "NVMe", 00:25:47.524 "listen_addresses": [ 00:25:47.524 { 00:25:47.524 "trtype": "TCP", 00:25:47.524 "adrfam": "IPv4", 00:25:47.524 "traddr": "10.0.0.2", 00:25:47.524 "trsvcid": "4420" 00:25:47.524 } 00:25:47.524 ], 00:25:47.524 "allow_any_host": true, 00:25:47.524 "hosts": [], 00:25:47.524 "serial_number": "SPDK00000000000001", 00:25:47.524 "model_number": "SPDK bdev Controller", 00:25:47.524 "max_namespaces": 2, 00:25:47.524 "min_cntlid": 1, 00:25:47.524 "max_cntlid": 65519, 00:25:47.524 "namespaces": [ 00:25:47.524 { 00:25:47.524 "nsid": 1, 00:25:47.524 "bdev_name": "Malloc0", 00:25:47.525 "name": "Malloc0", 00:25:47.525 "nguid": "98E99C8B4BFC46B398E8E30EB28D2ADA", 00:25:47.525 "uuid": "98e99c8b-4bfc-46b3-98e8-e30eb28d2ada" 00:25:47.525 }, 00:25:47.525 { 00:25:47.525 "nsid": 2, 00:25:47.525 "bdev_name": "Malloc1", 00:25:47.525 "name": "Malloc1", 00:25:47.525 "nguid": "72FCFE4A846E43F680BBFD77E150307C", 00:25:47.525 "uuid": "72fcfe4a-846e-43f6-80bb-fd77e150307c" 00:25:47.525 } 00:25:47.525 ] 00:25:47.525 } 00:25:47.525 ] 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2583793 00:25:47.525 Asynchronous Event Request test 00:25:47.525 Attaching to 10.0.0.2 00:25:47.525 Attached to 10.0.0.2 00:25:47.525 Registering asynchronous event callbacks... 00:25:47.525 Starting namespace attribute notice tests for all controllers... 00:25:47.525 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:47.525 aer_cb - Changed Namespace 00:25:47.525 Cleaning up... 
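For reference, the subsystem wiring this AER test exercises reduces to the short sketch below. It assumes SPDK's scripts/rpc.py client as a stand-in for the harness's rpc_cmd wrapper and the aer example binary under test/nvme/aer/; the NQN, listener address/port, malloc sizes and flags are copied from the trace above, so treat it as an illustration rather than the harness's exact code path:

  # target side: TCP transport, one subsystem, one namespace, one listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: start the AER listener, then hot-add a second namespace;
  # the namespace attribute change is what fires the aer_cb logged above
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The second nvmf_get_subsystems dump above shows the result: Malloc1 attached as nsid 2, and the "aer_cb - Changed Namespace" notice confirms the controller raised the expected asynchronous event.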
00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:47.525 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:47.525 rmmod nvme_tcp 00:25:47.783 rmmod nvme_fabrics 00:25:47.783 rmmod nvme_keyring 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2583646 ']' 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2583646 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2583646 ']' 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2583646 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2583646 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2583646' 00:25:47.783 killing process with pid 2583646 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # 
kill 2583646 00:25:47.783 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2583646 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.042 14:20:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:50.575 00:25:50.575 real 0m6.572s 00:25:50.575 user 0m5.676s 00:25:50.575 sys 0m2.676s 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.575 ************************************ 00:25:50.575 END TEST nvmf_aer 00:25:50.575 ************************************ 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.575 ************************************ 00:25:50.575 START TEST nvmf_async_init 00:25:50.575 ************************************ 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:50.575 * Looking for test storage... 
00:25:50.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.575 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:50.576 14:20:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ed658071bd4b4412a78024ff697e9a66 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:50.576 14:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:53.109 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:53.109 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:53.109 Found net devices under 0000:84:00.0: cvl_0_0 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:53.109 Found net devices under 0000:84:00.1: cvl_0_1 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.109 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:25:53.110 00:25:53.110 --- 10.0.0.2 ping statistics --- 00:25:53.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.110 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
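The reverse ping from inside the namespace completes below. For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) reduces to the following sketch; the cvl_0_* names are the e810 ports discovered earlier, and the namespace name is a harness convention, not anything required by NVMe/TCP itself:

ip netns add cvl_0_0_ns_spdk                        # target side lives in its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator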
00:25:53.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:53.110 00:25:53.110 --- 10.0.0.1 ping statistics --- 00:25:53.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.110 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2585873 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2585873 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2585873 ']' 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:53.110 14:20:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.110 [2024-07-26 14:20:09.929011] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
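The EAL parameter line and reactor notice for this app instance follow. The nvmfappstart step above amounts to launching nvmf_tgt inside the namespace and waiting for its RPC socket; the polling loop below is only a paraphrase of what waitforlisten does, with rpc_get_methods used as one cheap probe of the socket, and the SPDK path shortened:

ip netns exec cvl_0_0_ns_spdk \
    /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll until the app accepts RPCs on the default UNIX socket
until /path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done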
00:25:53.110 [2024-07-26 14:20:09.929109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.110 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.368 [2024-07-26 14:20:10.013220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.368 [2024-07-26 14:20:10.138444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.368 [2024-07-26 14:20:10.138508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.368 [2024-07-26 14:20:10.138524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.368 [2024-07-26 14:20:10.138537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.368 [2024-07-26 14:20:10.138548] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.368 [2024-07-26 14:20:10.138581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.637 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:53.637 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:25:53.637 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:53.637 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:53.637 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.637 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.637 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.638 [2024-07-26 14:20:10.284536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.638 null0 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:53.638 14:20:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ed658071bd4b4412a78024ff697e9a66 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.638 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.638 [2024-07-26 14:20:10.324827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.639 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.639 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:53.639 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.639 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.904 nvme0n1 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.904 [ 00:25:53.904 { 00:25:53.904 "name": "nvme0n1", 00:25:53.904 "aliases": [ 00:25:53.904 "ed658071-bd4b-4412-a780-24ff697e9a66" 00:25:53.904 ], 00:25:53.904 "product_name": "NVMe disk", 00:25:53.904 "block_size": 512, 00:25:53.904 "num_blocks": 2097152, 00:25:53.904 "uuid": "ed658071-bd4b-4412-a780-24ff697e9a66", 00:25:53.904 "assigned_rate_limits": { 00:25:53.904 "rw_ios_per_sec": 0, 00:25:53.904 "rw_mbytes_per_sec": 0, 00:25:53.904 "r_mbytes_per_sec": 0, 00:25:53.904 "w_mbytes_per_sec": 0 00:25:53.904 }, 00:25:53.904 "claimed": false, 00:25:53.904 "zoned": false, 00:25:53.904 "supported_io_types": { 00:25:53.904 "read": true, 00:25:53.904 "write": true, 00:25:53.904 "unmap": false, 00:25:53.904 "flush": true, 00:25:53.904 "reset": true, 00:25:53.904 "nvme_admin": true, 00:25:53.904 "nvme_io": true, 00:25:53.904 "nvme_io_md": false, 00:25:53.904 "write_zeroes": true, 00:25:53.904 "zcopy": false, 00:25:53.904 "get_zone_info": false, 00:25:53.904 "zone_management": false, 00:25:53.904 "zone_append": false, 00:25:53.904 "compare": true, 00:25:53.904 "compare_and_write": true, 00:25:53.904 "abort": true, 00:25:53.904 "seek_hole": false, 00:25:53.904 "seek_data": false, 00:25:53.904 "copy": true, 00:25:53.904 "nvme_iov_md": 
false 00:25:53.904 }, 00:25:53.904 "memory_domains": [ 00:25:53.904 { 00:25:53.904 "dma_device_id": "system", 00:25:53.904 "dma_device_type": 1 00:25:53.904 } 00:25:53.904 ], 00:25:53.904 "driver_specific": { 00:25:53.904 "nvme": [ 00:25:53.904 { 00:25:53.904 "trid": { 00:25:53.904 "trtype": "TCP", 00:25:53.904 "adrfam": "IPv4", 00:25:53.904 "traddr": "10.0.0.2", 00:25:53.904 "trsvcid": "4420", 00:25:53.904 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:53.904 }, 00:25:53.904 "ctrlr_data": { 00:25:53.904 "cntlid": 1, 00:25:53.904 "vendor_id": "0x8086", 00:25:53.904 "model_number": "SPDK bdev Controller", 00:25:53.904 "serial_number": "00000000000000000000", 00:25:53.904 "firmware_revision": "24.09", 00:25:53.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.904 "oacs": { 00:25:53.904 "security": 0, 00:25:53.904 "format": 0, 00:25:53.904 "firmware": 0, 00:25:53.904 "ns_manage": 0 00:25:53.904 }, 00:25:53.904 "multi_ctrlr": true, 00:25:53.904 "ana_reporting": false 00:25:53.904 }, 00:25:53.904 "vs": { 00:25:53.904 "nvme_version": "1.3" 00:25:53.904 }, 00:25:53.904 "ns_data": { 00:25:53.904 "id": 1, 00:25:53.904 "can_share": true 00:25:53.904 } 00:25:53.904 } 00:25:53.904 ], 00:25:53.904 "mp_policy": "active_passive" 00:25:53.904 } 00:25:53.904 } 00:25:53.904 ] 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.904 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.904 [2024-07-26 14:20:10.577341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:53.905 [2024-07-26 14:20:10.577446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172f740 (9): Bad file descriptor 00:25:53.905 [2024-07-26 14:20:10.719589] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
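With the reset complete, the harness re-reads the bdev to confirm the controller reconnected; in the JSON that follows, cntlid has advanced from 1 to 2 (and it reaches 3 after the later TLS attach), which is how the test observes that a fresh association was made to the same subsystem. A minimal check, assuming jq is available on the box:

/path/to/spdk/scripts/rpc.py bdev_nvme_reset_controller nvme0
/path/to/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # expect the value to increment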
00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 [ 00:25:53.905 { 00:25:53.905 "name": "nvme0n1", 00:25:53.905 "aliases": [ 00:25:53.905 "ed658071-bd4b-4412-a780-24ff697e9a66" 00:25:53.905 ], 00:25:53.905 "product_name": "NVMe disk", 00:25:53.905 "block_size": 512, 00:25:53.905 "num_blocks": 2097152, 00:25:53.905 "uuid": "ed658071-bd4b-4412-a780-24ff697e9a66", 00:25:53.905 "assigned_rate_limits": { 00:25:53.905 "rw_ios_per_sec": 0, 00:25:53.905 "rw_mbytes_per_sec": 0, 00:25:53.905 "r_mbytes_per_sec": 0, 00:25:53.905 "w_mbytes_per_sec": 0 00:25:53.905 }, 00:25:53.905 "claimed": false, 00:25:53.905 "zoned": false, 00:25:53.905 "supported_io_types": { 00:25:53.905 "read": true, 00:25:53.905 "write": true, 00:25:53.905 "unmap": false, 00:25:53.905 "flush": true, 00:25:53.905 "reset": true, 00:25:53.905 "nvme_admin": true, 00:25:53.905 "nvme_io": true, 00:25:53.905 "nvme_io_md": false, 00:25:53.905 "write_zeroes": true, 00:25:53.905 "zcopy": false, 00:25:53.905 "get_zone_info": false, 00:25:53.905 "zone_management": false, 00:25:53.905 "zone_append": false, 00:25:53.905 "compare": true, 00:25:53.905 "compare_and_write": true, 00:25:53.905 "abort": true, 00:25:53.905 "seek_hole": false, 00:25:53.905 "seek_data": false, 00:25:53.905 "copy": true, 00:25:53.905 "nvme_iov_md": false 00:25:53.905 }, 00:25:53.905 "memory_domains": [ 00:25:53.905 { 00:25:53.905 "dma_device_id": "system", 00:25:53.905 "dma_device_type": 1 00:25:53.905 } 00:25:53.905 ], 00:25:53.905 "driver_specific": { 00:25:53.905 "nvme": [ 00:25:53.905 { 00:25:53.905 "trid": { 00:25:53.905 "trtype": "TCP", 00:25:53.905 "adrfam": "IPv4", 00:25:53.905 "traddr": "10.0.0.2", 00:25:53.905 "trsvcid": "4420", 00:25:53.905 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:53.905 }, 00:25:53.905 "ctrlr_data": { 00:25:53.905 "cntlid": 2, 00:25:53.905 "vendor_id": "0x8086", 00:25:53.905 "model_number": "SPDK bdev Controller", 00:25:53.905 "serial_number": "00000000000000000000", 00:25:53.905 "firmware_revision": "24.09", 00:25:53.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.905 "oacs": { 00:25:53.905 "security": 0, 00:25:53.905 "format": 0, 00:25:53.905 "firmware": 0, 00:25:53.905 "ns_manage": 0 00:25:53.905 }, 00:25:53.905 "multi_ctrlr": true, 00:25:53.905 "ana_reporting": false 00:25:53.905 }, 00:25:53.905 "vs": { 00:25:53.905 "nvme_version": "1.3" 00:25:53.905 }, 00:25:53.905 "ns_data": { 00:25:53.905 "id": 1, 00:25:53.905 "can_share": true 00:25:53.905 } 00:25:53.905 } 00:25:53.905 ], 00:25:53.905 "mp_policy": "active_passive" 00:25:53.905 } 00:25:53.905 } 00:25:53.905 ] 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.905 14:20:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.x77R61AxHR 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.x77R61AxHR 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 [2024-07-26 14:20:10.770022] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:53.905 [2024-07-26 14:20:10.770160] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x77R61AxHR 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 [2024-07-26 14:20:10.778032] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x77R61AxHR 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.905 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 [2024-07-26 14:20:10.786059] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:53.905 [2024-07-26 14:20:10.786127] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:54.163 nvme0n1 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
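The TLS path above is exercised end to end: a PSK in the NVMeTLSkey-1:01: interchange format is written to a temp file with 0600 permissions, the subsystem is closed to arbitrary hosts, a --secure-channel listener is added on port 4421, and the host attaches with the same key. Condensed from the trace (note both deprecation warnings above: the PSK-path RPC argument and spdk_nvme_ctrlr_opts.psk are slated for removal in v24.09):

key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"                      # the test keeps the key file private
/path/to/spdk/scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
/path/to/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
/path/to/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk "$key_path"
/path/to/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"                           # removed again at the end of the test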
00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:54.163 [ 00:25:54.163 { 00:25:54.163 "name": "nvme0n1", 00:25:54.163 "aliases": [ 00:25:54.163 "ed658071-bd4b-4412-a780-24ff697e9a66" 00:25:54.163 ], 00:25:54.163 "product_name": "NVMe disk", 00:25:54.163 "block_size": 512, 00:25:54.163 "num_blocks": 2097152, 00:25:54.163 "uuid": "ed658071-bd4b-4412-a780-24ff697e9a66", 00:25:54.163 "assigned_rate_limits": { 00:25:54.163 "rw_ios_per_sec": 0, 00:25:54.163 "rw_mbytes_per_sec": 0, 00:25:54.163 "r_mbytes_per_sec": 0, 00:25:54.163 "w_mbytes_per_sec": 0 00:25:54.163 }, 00:25:54.163 "claimed": false, 00:25:54.163 "zoned": false, 00:25:54.163 "supported_io_types": { 00:25:54.163 "read": true, 00:25:54.163 "write": true, 00:25:54.163 "unmap": false, 00:25:54.163 "flush": true, 00:25:54.163 "reset": true, 00:25:54.163 "nvme_admin": true, 00:25:54.163 "nvme_io": true, 00:25:54.163 "nvme_io_md": false, 00:25:54.163 "write_zeroes": true, 00:25:54.163 "zcopy": false, 00:25:54.163 "get_zone_info": false, 00:25:54.163 "zone_management": false, 00:25:54.163 "zone_append": false, 00:25:54.163 "compare": true, 00:25:54.163 "compare_and_write": true, 00:25:54.163 "abort": true, 00:25:54.163 "seek_hole": false, 00:25:54.163 "seek_data": false, 00:25:54.163 "copy": true, 00:25:54.163 "nvme_iov_md": false 00:25:54.163 }, 00:25:54.163 "memory_domains": [ 00:25:54.163 { 00:25:54.163 "dma_device_id": "system", 00:25:54.163 "dma_device_type": 1 00:25:54.163 } 00:25:54.163 ], 00:25:54.163 "driver_specific": { 00:25:54.163 "nvme": [ 00:25:54.163 { 00:25:54.163 "trid": { 00:25:54.163 "trtype": "TCP", 00:25:54.163 "adrfam": "IPv4", 00:25:54.163 "traddr": "10.0.0.2", 00:25:54.163 "trsvcid": "4421", 00:25:54.163 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:54.163 }, 00:25:54.163 "ctrlr_data": { 00:25:54.163 "cntlid": 3, 00:25:54.163 "vendor_id": "0x8086", 00:25:54.163 "model_number": "SPDK bdev Controller", 00:25:54.163 "serial_number": "00000000000000000000", 00:25:54.163 "firmware_revision": "24.09", 00:25:54.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.163 "oacs": { 00:25:54.163 "security": 0, 00:25:54.163 "format": 0, 00:25:54.163 "firmware": 0, 00:25:54.163 "ns_manage": 0 00:25:54.163 }, 00:25:54.163 "multi_ctrlr": true, 00:25:54.163 "ana_reporting": false 00:25:54.163 }, 00:25:54.163 "vs": { 00:25:54.163 "nvme_version": "1.3" 00:25:54.163 }, 00:25:54.163 "ns_data": { 00:25:54.163 "id": 1, 00:25:54.163 "can_share": true 00:25:54.163 } 00:25:54.163 } 00:25:54.163 ], 00:25:54.163 "mp_policy": "active_passive" 00:25:54.163 } 00:25:54.163 } 00:25:54.163 ] 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.x77R61AxHR 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:54.163 14:20:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:54.163 rmmod nvme_tcp 00:25:54.163 rmmod nvme_fabrics 00:25:54.163 rmmod nvme_keyring 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2585873 ']' 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2585873 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2585873 ']' 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2585873 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2585873 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2585873' 00:25:54.163 killing process with pid 2585873 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2585873 00:25:54.163 [2024-07-26 14:20:10.976256] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:54.163 [2024-07-26 14:20:10.976293] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:54.163 14:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2585873 00:25:54.421 14:20:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.421 14:20:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.421 14:20:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.421 14:20:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.421 14:20:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.421 14:20:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.421 14:20:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.421 14:20:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:56.955 00:25:56.955 real 0m6.421s 00:25:56.955 user 0m2.412s 00:25:56.955 sys 0m2.435s 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.955 ************************************ 00:25:56.955 END TEST nvmf_async_init 00:25:56.955 ************************************ 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.955 ************************************ 00:25:56.955 START TEST dma 00:25:56.955 ************************************ 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:56.955 * Looking for test storage... 00:25:56.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.955 
14:20:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.955 14:20:13 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:56.955 00:25:56.955 real 0m0.087s 00:25:56.955 user 0m0.041s 00:25:56.955 sys 0m0.053s 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:56.955 ************************************ 00:25:56.955 END TEST dma 00:25:56.955 ************************************ 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.955 ************************************ 00:25:56.955 START TEST nvmf_identify 00:25:56.955 ************************************ 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:56.955 * Looking for test storage... 00:25:56.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.955 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:56.956 14:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.492 14:20:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:59.492 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.492 14:20:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:59.492 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:59.492 Found net devices under 0000:84:00.0: cvl_0_0 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.492 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:59.493 Found net devices under 0000:84:00.1: cvl_0_1 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:25:59.493 00:25:59.493 --- 10.0.0.2 ping statistics --- 00:25:59.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.493 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:25:59.493 00:25:59.493 --- 10.0.0.1 ping statistics --- 00:25:59.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.493 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2588010 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2588010 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2588010 ']' 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.493 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:59.493 [2024-07-26 14:20:16.365212] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
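The EAL parameter line for this instance follows. Unlike the async_init run, identify.sh starts the target with core mask 0xF, so four reactors come up, matching the "Total cores available: 4" notice further down; the path is again shortened here:

ip netns exec cvl_0_0_ns_spdk \
    /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
# -m 0xF: core mask 0b1111, reactors on cores 0,1,2,3
# -e 0xFFFF: enable all tracepoint groups (the "Tracepoint Group Mask" notice)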
00:25:59.493 [2024-07-26 14:20:16.365391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.752 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.752 [2024-07-26 14:20:16.469734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.752 [2024-07-26 14:20:16.598655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.752 [2024-07-26 14:20:16.598722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.752 [2024-07-26 14:20:16.598740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.752 [2024-07-26 14:20:16.598753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.752 [2024-07-26 14:20:16.598765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.752 [2024-07-26 14:20:16.598847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.752 [2024-07-26 14:20:16.601456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.752 [2024-07-26 14:20:16.601519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:59.752 [2024-07-26 14:20:16.601524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.010 [2024-07-26 14:20:16.745075] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.010 Malloc0 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.010 [2024-07-26 14:20:16.827562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.010 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.011 [ 00:26:00.011 { 00:26:00.011 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:00.011 "subtype": "Discovery", 00:26:00.011 "listen_addresses": [ 00:26:00.011 { 00:26:00.011 "trtype": "TCP", 00:26:00.011 "adrfam": "IPv4", 00:26:00.011 "traddr": "10.0.0.2", 00:26:00.011 "trsvcid": "4420" 00:26:00.011 } 00:26:00.011 ], 00:26:00.011 "allow_any_host": true, 00:26:00.011 "hosts": [] 00:26:00.011 }, 00:26:00.011 { 00:26:00.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.011 "subtype": "NVMe", 00:26:00.011 "listen_addresses": [ 00:26:00.011 { 00:26:00.011 "trtype": "TCP", 00:26:00.011 "adrfam": "IPv4", 00:26:00.011 "traddr": "10.0.0.2", 00:26:00.011 "trsvcid": "4420" 00:26:00.011 } 00:26:00.011 ], 00:26:00.011 "allow_any_host": true, 00:26:00.011 "hosts": [], 00:26:00.011 "serial_number": "SPDK00000000000001", 00:26:00.011 "model_number": "SPDK bdev Controller", 00:26:00.011 "max_namespaces": 32, 00:26:00.011 "min_cntlid": 1, 00:26:00.011 "max_cntlid": 65519, 00:26:00.011 "namespaces": [ 00:26:00.011 { 00:26:00.011 "nsid": 1, 00:26:00.011 "bdev_name": "Malloc0", 00:26:00.011 "name": "Malloc0", 00:26:00.011 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:00.011 "eui64": "ABCDEF0123456789", 00:26:00.011 "uuid": "52935f02-0c7e-465d-88fa-c30c9c808001" 00:26:00.011 } 00:26:00.011 ] 00:26:00.011 } 00:26:00.011 ] 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.011 14:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:00.011 [2024-07-26 14:20:16.871098] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:26:00.011 [2024-07-26 14:20:16.871142] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588164 ] 00:26:00.011 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.272 [2024-07-26 14:20:16.909142] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:00.272 [2024-07-26 14:20:16.909219] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:00.272 [2024-07-26 14:20:16.909230] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:00.272 [2024-07-26 14:20:16.909248] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:00.272 [2024-07-26 14:20:16.909263] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:00.272 [2024-07-26 14:20:16.909633] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:00.272 [2024-07-26 14:20:16.909694] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa31540 0 00:26:00.272 [2024-07-26 14:20:16.920441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:00.272 [2024-07-26 14:20:16.920474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:00.272 [2024-07-26 14:20:16.920486] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:00.272 [2024-07-26 14:20:16.920493] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:00.272 [2024-07-26 14:20:16.920557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.920571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.920580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.272 [2024-07-26 14:20:16.920604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:00.272 [2024-07-26 14:20:16.920635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.272 [2024-07-26 14:20:16.927443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.272 [2024-07-26 14:20:16.927464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.272 [2024-07-26 14:20:16.927472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.927487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.272 [2024-07-26 14:20:16.927514] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:00.272 [2024-07-26 14:20:16.927528] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:00.272 [2024-07-26 14:20:16.927539] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:26:00.272 [2024-07-26 14:20:16.927564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.927574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.927581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.272 [2024-07-26 14:20:16.927594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.272 [2024-07-26 14:20:16.927621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.272 [2024-07-26 14:20:16.927800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.272 [2024-07-26 14:20:16.927813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.272 [2024-07-26 14:20:16.927820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.927828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.272 [2024-07-26 14:20:16.927843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:00.272 [2024-07-26 14:20:16.927858] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:00.272 [2024-07-26 14:20:16.927871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.927879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.927886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.272 [2024-07-26 14:20:16.927898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.272 [2024-07-26 14:20:16.927922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.272 [2024-07-26 14:20:16.928054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.272 [2024-07-26 14:20:16.928071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.272 [2024-07-26 14:20:16.928078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.272 [2024-07-26 14:20:16.928095] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:00.272 [2024-07-26 14:20:16.928111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:00.272 [2024-07-26 14:20:16.928124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.272 [2024-07-26 14:20:16.928151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.272 [2024-07-26 14:20:16.928174] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.272 [2024-07-26 14:20:16.928350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.272 [2024-07-26 14:20:16.928363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.272 [2024-07-26 14:20:16.928370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.272 [2024-07-26 14:20:16.928393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:00.272 [2024-07-26 14:20:16.928411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.272 [2024-07-26 14:20:16.928449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.272 [2024-07-26 14:20:16.928472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.272 [2024-07-26 14:20:16.928649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.272 [2024-07-26 14:20:16.928662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.272 [2024-07-26 14:20:16.928669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.272 [2024-07-26 14:20:16.928686] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:00.272 [2024-07-26 14:20:16.928695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:00.272 [2024-07-26 14:20:16.928710] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:00.272 [2024-07-26 14:20:16.928822] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:00.272 [2024-07-26 14:20:16.928831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:00.272 [2024-07-26 14:20:16.928848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.928863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.272 [2024-07-26 14:20:16.928874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.272 [2024-07-26 14:20:16.928897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.272 [2024-07-26 14:20:16.929071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.272 
[2024-07-26 14:20:16.929084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.272 [2024-07-26 14:20:16.929091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.929098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.272 [2024-07-26 14:20:16.929107] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:00.272 [2024-07-26 14:20:16.929124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.929133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.272 [2024-07-26 14:20:16.929140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.272 [2024-07-26 14:20:16.929152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.272 [2024-07-26 14:20:16.929174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.272 [2024-07-26 14:20:16.929304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.272 [2024-07-26 14:20:16.929320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.273 [2024-07-26 14:20:16.929332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.273 [2024-07-26 14:20:16.929349] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:00.273 [2024-07-26 14:20:16.929358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:00.273 [2024-07-26 14:20:16.929373] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:00.273 [2024-07-26 14:20:16.929393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:00.273 [2024-07-26 14:20:16.929412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.929441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.273 [2024-07-26 14:20:16.929467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.273 [2024-07-26 14:20:16.929648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.273 [2024-07-26 14:20:16.929665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.273 [2024-07-26 14:20:16.929673] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929681] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa31540): datao=0, datal=4096, cccid=0 00:26:00.273 [2024-07-26 14:20:16.929689] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xa913c0) on tqpair(0xa31540): expected_datao=0, payload_size=4096 00:26:00.273 [2024-07-26 14:20:16.929698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929710] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929719] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.273 [2024-07-26 14:20:16.929774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.273 [2024-07-26 14:20:16.929781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.273 [2024-07-26 14:20:16.929801] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:00.273 [2024-07-26 14:20:16.929811] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:00.273 [2024-07-26 14:20:16.929819] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:00.273 [2024-07-26 14:20:16.929829] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:00.273 [2024-07-26 14:20:16.929839] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:00.273 [2024-07-26 14:20:16.929848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:00.273 [2024-07-26 14:20:16.929864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:00.273 [2024-07-26 14:20:16.929883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.929899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.929915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:00.273 [2024-07-26 14:20:16.929939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.273 [2024-07-26 14:20:16.930116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.273 [2024-07-26 14:20:16.930129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.273 [2024-07-26 14:20:16.930137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.273 [2024-07-26 14:20:16.930158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.930184] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.273 [2024-07-26 14:20:16.930194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.930218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.273 [2024-07-26 14:20:16.930229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.930253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.273 [2024-07-26 14:20:16.930263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.930287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.273 [2024-07-26 14:20:16.930297] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:00.273 [2024-07-26 14:20:16.930318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:00.273 [2024-07-26 14:20:16.930332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.930351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.273 [2024-07-26 14:20:16.930375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa913c0, cid 0, qid 0 00:26:00.273 [2024-07-26 14:20:16.930387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91540, cid 1, qid 0 00:26:00.273 [2024-07-26 14:20:16.930395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa916c0, cid 2, qid 0 00:26:00.273 [2024-07-26 14:20:16.930404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.273 [2024-07-26 14:20:16.930412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa919c0, cid 4, qid 0 00:26:00.273 [2024-07-26 14:20:16.930633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.273 [2024-07-26 14:20:16.930654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.273 [2024-07-26 14:20:16.930662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930670] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa919c0) on tqpair=0xa31540 00:26:00.273 [2024-07-26 14:20:16.930680] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:00.273 [2024-07-26 14:20:16.930690] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:00.273 [2024-07-26 14:20:16.930710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.930732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.273 [2024-07-26 14:20:16.930756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa919c0, cid 4, qid 0 00:26:00.273 [2024-07-26 14:20:16.930898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.273 [2024-07-26 14:20:16.930914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.273 [2024-07-26 14:20:16.930922] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930929] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa31540): datao=0, datal=4096, cccid=4 00:26:00.273 [2024-07-26 14:20:16.930937] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa919c0) on tqpair(0xa31540): expected_datao=0, payload_size=4096 00:26:00.273 [2024-07-26 14:20:16.930945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930977] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.930987] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.975441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.273 [2024-07-26 14:20:16.975461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.273 [2024-07-26 14:20:16.975470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.975477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa919c0) on tqpair=0xa31540 00:26:00.273 [2024-07-26 14:20:16.975500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:00.273 [2024-07-26 14:20:16.975546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.975558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.975571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.273 [2024-07-26 14:20:16.975584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.975592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.273 [2024-07-26 14:20:16.975599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa31540) 00:26:00.273 [2024-07-26 14:20:16.975609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:00.273 [2024-07-26 14:20:16.975641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa919c0, cid 4, qid 0 00:26:00.273 [2024-07-26 14:20:16.975655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91b40, cid 5, qid 0 00:26:00.273 [2024-07-26 14:20:16.975928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.274 [2024-07-26 14:20:16.975945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.274 [2024-07-26 14:20:16.975952] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:16.975959] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa31540): datao=0, datal=1024, cccid=4 00:26:00.274 [2024-07-26 14:20:16.975973] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa919c0) on tqpair(0xa31540): expected_datao=0, payload_size=1024 00:26:00.274 [2024-07-26 14:20:16.975982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:16.975992] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:16.976000] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:16.976010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.274 [2024-07-26 14:20:16.976019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.274 [2024-07-26 14:20:16.976027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:16.976034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91b40) on tqpair=0xa31540 00:26:00.274 [2024-07-26 14:20:17.016592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.274 [2024-07-26 14:20:17.016614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.274 [2024-07-26 14:20:17.016622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.016630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa919c0) on tqpair=0xa31540 00:26:00.274 [2024-07-26 14:20:17.016651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.016661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa31540) 00:26:00.274 [2024-07-26 14:20:17.016674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.274 [2024-07-26 14:20:17.016708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa919c0, cid 4, qid 0 00:26:00.274 [2024-07-26 14:20:17.016916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.274 [2024-07-26 14:20:17.016929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.274 [2024-07-26 14:20:17.016937] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.016944] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa31540): datao=0, datal=3072, cccid=4 00:26:00.274 [2024-07-26 14:20:17.016952] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa919c0) on tqpair(0xa31540): expected_datao=0, payload_size=3072 00:26:00.274 [2024-07-26 14:20:17.016960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.016971] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.016979] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.017026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.274 [2024-07-26 14:20:17.017038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.274 [2024-07-26 14:20:17.017045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.017053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa919c0) on tqpair=0xa31540 00:26:00.274 [2024-07-26 14:20:17.017069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.017078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa31540) 00:26:00.274 [2024-07-26 14:20:17.017090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.274 [2024-07-26 14:20:17.017120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa919c0, cid 4, qid 0 00:26:00.274 [2024-07-26 14:20:17.017314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.274 [2024-07-26 14:20:17.017327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.274 [2024-07-26 14:20:17.017334] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.017341] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa31540): datao=0, datal=8, cccid=4 00:26:00.274 [2024-07-26 14:20:17.017350] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa919c0) on tqpair(0xa31540): expected_datao=0, payload_size=8 00:26:00.274 [2024-07-26 14:20:17.017363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.017375] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.017383] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.057573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.274 [2024-07-26 14:20:17.057592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.274 [2024-07-26 14:20:17.057600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.274 [2024-07-26 14:20:17.057608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa919c0) on tqpair=0xa31540 00:26:00.274 ===================================================== 00:26:00.274 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:00.274 ===================================================== 00:26:00.274 Controller Capabilities/Features 00:26:00.274 ================================ 00:26:00.274 Vendor ID: 0000 00:26:00.274 Subsystem Vendor ID: 0000 00:26:00.274 Serial Number: .................... 00:26:00.274 Model Number: ........................................ 
00:26:00.274 Firmware Version: 24.09 00:26:00.274 Recommended Arb Burst: 0 00:26:00.274 IEEE OUI Identifier: 00 00 00 00:26:00.274 Multi-path I/O 00:26:00.274 May have multiple subsystem ports: No 00:26:00.274 May have multiple controllers: No 00:26:00.274 Associated with SR-IOV VF: No 00:26:00.274 Max Data Transfer Size: 131072 00:26:00.274 Max Number of Namespaces: 0 00:26:00.274 Max Number of I/O Queues: 1024 00:26:00.274 NVMe Specification Version (VS): 1.3 00:26:00.274 NVMe Specification Version (Identify): 1.3 00:26:00.274 Maximum Queue Entries: 128 00:26:00.274 Contiguous Queues Required: Yes 00:26:00.274 Arbitration Mechanisms Supported 00:26:00.274 Weighted Round Robin: Not Supported 00:26:00.274 Vendor Specific: Not Supported 00:26:00.274 Reset Timeout: 15000 ms 00:26:00.274 Doorbell Stride: 4 bytes 00:26:00.274 NVM Subsystem Reset: Not Supported 00:26:00.274 Command Sets Supported 00:26:00.274 NVM Command Set: Supported 00:26:00.274 Boot Partition: Not Supported 00:26:00.274 Memory Page Size Minimum: 4096 bytes 00:26:00.274 Memory Page Size Maximum: 4096 bytes 00:26:00.274 Persistent Memory Region: Not Supported 00:26:00.274 Optional Asynchronous Events Supported 00:26:00.274 Namespace Attribute Notices: Not Supported 00:26:00.274 Firmware Activation Notices: Not Supported 00:26:00.274 ANA Change Notices: Not Supported 00:26:00.274 PLE Aggregate Log Change Notices: Not Supported 00:26:00.274 LBA Status Info Alert Notices: Not Supported 00:26:00.274 EGE Aggregate Log Change Notices: Not Supported 00:26:00.274 Normal NVM Subsystem Shutdown event: Not Supported 00:26:00.274 Zone Descriptor Change Notices: Not Supported 00:26:00.274 Discovery Log Change Notices: Supported 00:26:00.274 Controller Attributes 00:26:00.274 128-bit Host Identifier: Not Supported 00:26:00.274 Non-Operational Permissive Mode: Not Supported 00:26:00.274 NVM Sets: Not Supported 00:26:00.274 Read Recovery Levels: Not Supported 00:26:00.274 Endurance Groups: Not Supported 00:26:00.274 Predictable Latency Mode: Not Supported 00:26:00.274 Traffic Based Keep ALive: Not Supported 00:26:00.274 Namespace Granularity: Not Supported 00:26:00.274 SQ Associations: Not Supported 00:26:00.274 UUID List: Not Supported 00:26:00.274 Multi-Domain Subsystem: Not Supported 00:26:00.274 Fixed Capacity Management: Not Supported 00:26:00.274 Variable Capacity Management: Not Supported 00:26:00.274 Delete Endurance Group: Not Supported 00:26:00.274 Delete NVM Set: Not Supported 00:26:00.274 Extended LBA Formats Supported: Not Supported 00:26:00.274 Flexible Data Placement Supported: Not Supported 00:26:00.274 00:26:00.274 Controller Memory Buffer Support 00:26:00.274 ================================ 00:26:00.274 Supported: No 00:26:00.274 00:26:00.274 Persistent Memory Region Support 00:26:00.274 ================================ 00:26:00.274 Supported: No 00:26:00.274 00:26:00.274 Admin Command Set Attributes 00:26:00.274 ============================ 00:26:00.274 Security Send/Receive: Not Supported 00:26:00.274 Format NVM: Not Supported 00:26:00.274 Firmware Activate/Download: Not Supported 00:26:00.274 Namespace Management: Not Supported 00:26:00.274 Device Self-Test: Not Supported 00:26:00.274 Directives: Not Supported 00:26:00.274 NVMe-MI: Not Supported 00:26:00.274 Virtualization Management: Not Supported 00:26:00.274 Doorbell Buffer Config: Not Supported 00:26:00.274 Get LBA Status Capability: Not Supported 00:26:00.274 Command & Feature Lockdown Capability: Not Supported 00:26:00.274 Abort Command Limit: 1 00:26:00.274 Async 
Event Request Limit: 4 00:26:00.274 Number of Firmware Slots: N/A 00:26:00.274 Firmware Slot 1 Read-Only: N/A 00:26:00.274 Firmware Activation Without Reset: N/A 00:26:00.274 Multiple Update Detection Support: N/A 00:26:00.274 Firmware Update Granularity: No Information Provided 00:26:00.274 Per-Namespace SMART Log: No 00:26:00.274 Asymmetric Namespace Access Log Page: Not Supported 00:26:00.274 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:00.274 Command Effects Log Page: Not Supported 00:26:00.274 Get Log Page Extended Data: Supported 00:26:00.274 Telemetry Log Pages: Not Supported 00:26:00.274 Persistent Event Log Pages: Not Supported 00:26:00.274 Supported Log Pages Log Page: May Support 00:26:00.275 Commands Supported & Effects Log Page: Not Supported 00:26:00.275 Feature Identifiers & Effects Log Page:May Support 00:26:00.275 NVMe-MI Commands & Effects Log Page: May Support 00:26:00.275 Data Area 4 for Telemetry Log: Not Supported 00:26:00.275 Error Log Page Entries Supported: 128 00:26:00.275 Keep Alive: Not Supported 00:26:00.275 00:26:00.275 NVM Command Set Attributes 00:26:00.275 ========================== 00:26:00.275 Submission Queue Entry Size 00:26:00.275 Max: 1 00:26:00.275 Min: 1 00:26:00.275 Completion Queue Entry Size 00:26:00.275 Max: 1 00:26:00.275 Min: 1 00:26:00.275 Number of Namespaces: 0 00:26:00.275 Compare Command: Not Supported 00:26:00.275 Write Uncorrectable Command: Not Supported 00:26:00.275 Dataset Management Command: Not Supported 00:26:00.275 Write Zeroes Command: Not Supported 00:26:00.275 Set Features Save Field: Not Supported 00:26:00.275 Reservations: Not Supported 00:26:00.275 Timestamp: Not Supported 00:26:00.275 Copy: Not Supported 00:26:00.275 Volatile Write Cache: Not Present 00:26:00.275 Atomic Write Unit (Normal): 1 00:26:00.275 Atomic Write Unit (PFail): 1 00:26:00.275 Atomic Compare & Write Unit: 1 00:26:00.275 Fused Compare & Write: Supported 00:26:00.275 Scatter-Gather List 00:26:00.275 SGL Command Set: Supported 00:26:00.275 SGL Keyed: Supported 00:26:00.275 SGL Bit Bucket Descriptor: Not Supported 00:26:00.275 SGL Metadata Pointer: Not Supported 00:26:00.275 Oversized SGL: Not Supported 00:26:00.275 SGL Metadata Address: Not Supported 00:26:00.275 SGL Offset: Supported 00:26:00.275 Transport SGL Data Block: Not Supported 00:26:00.275 Replay Protected Memory Block: Not Supported 00:26:00.275 00:26:00.275 Firmware Slot Information 00:26:00.275 ========================= 00:26:00.275 Active slot: 0 00:26:00.275 00:26:00.275 00:26:00.275 Error Log 00:26:00.275 ========= 00:26:00.275 00:26:00.275 Active Namespaces 00:26:00.275 ================= 00:26:00.275 Discovery Log Page 00:26:00.275 ================== 00:26:00.275 Generation Counter: 2 00:26:00.275 Number of Records: 2 00:26:00.275 Record Format: 0 00:26:00.275 00:26:00.275 Discovery Log Entry 0 00:26:00.275 ---------------------- 00:26:00.275 Transport Type: 3 (TCP) 00:26:00.275 Address Family: 1 (IPv4) 00:26:00.275 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:00.275 Entry Flags: 00:26:00.275 Duplicate Returned Information: 1 00:26:00.275 Explicit Persistent Connection Support for Discovery: 1 00:26:00.275 Transport Requirements: 00:26:00.275 Secure Channel: Not Required 00:26:00.275 Port ID: 0 (0x0000) 00:26:00.275 Controller ID: 65535 (0xffff) 00:26:00.275 Admin Max SQ Size: 128 00:26:00.275 Transport Service Identifier: 4420 00:26:00.275 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:00.275 Transport Address: 10.0.0.2 00:26:00.275 
Discovery Log Entry 1 00:26:00.275 ---------------------- 00:26:00.275 Transport Type: 3 (TCP) 00:26:00.275 Address Family: 1 (IPv4) 00:26:00.275 Subsystem Type: 2 (NVM Subsystem) 00:26:00.275 Entry Flags: 00:26:00.275 Duplicate Returned Information: 0 00:26:00.275 Explicit Persistent Connection Support for Discovery: 0 00:26:00.275 Transport Requirements: 00:26:00.275 Secure Channel: Not Required 00:26:00.275 Port ID: 0 (0x0000) 00:26:00.275 Controller ID: 65535 (0xffff) 00:26:00.275 Admin Max SQ Size: 128 00:26:00.275 Transport Service Identifier: 4420 00:26:00.275 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:00.275 Transport Address: 10.0.0.2 [2024-07-26 14:20:17.057737] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:00.275 [2024-07-26 14:20:17.057762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa913c0) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.057775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.275 [2024-07-26 14:20:17.057785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91540) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.057794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.275 [2024-07-26 14:20:17.057803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa916c0) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.057811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.275 [2024-07-26 14:20:17.057819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.057828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.275 [2024-07-26 14:20:17.057848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.057858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.057865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.275 [2024-07-26 14:20:17.057877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.275 [2024-07-26 14:20:17.057905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.275 [2024-07-26 14:20:17.058079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.275 [2024-07-26 14:20:17.058096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.275 [2024-07-26 14:20:17.058103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.058124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.275 [2024-07-26 14:20:17.058150] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.275 [2024-07-26 14:20:17.058180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.275 [2024-07-26 14:20:17.058328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.275 [2024-07-26 14:20:17.058344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.275 [2024-07-26 14:20:17.058352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.058368] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:00.275 [2024-07-26 14:20:17.058382] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:00.275 [2024-07-26 14:20:17.058400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.275 [2024-07-26 14:20:17.058436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.275 [2024-07-26 14:20:17.058463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.275 [2024-07-26 14:20:17.058635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.275 [2024-07-26 14:20:17.058651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.275 [2024-07-26 14:20:17.058658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.058685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.275 [2024-07-26 14:20:17.058713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.275 [2024-07-26 14:20:17.058737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.275 [2024-07-26 14:20:17.058909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.275 [2024-07-26 14:20:17.058926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.275 [2024-07-26 14:20:17.058933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.275 [2024-07-26 14:20:17.058958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.058975] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.275 [2024-07-26 14:20:17.058987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.275 [2024-07-26 14:20:17.059010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.275 [2024-07-26 14:20:17.059147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.275 [2024-07-26 14:20:17.059163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.275 [2024-07-26 14:20:17.059171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.275 [2024-07-26 14:20:17.059178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.276 [2024-07-26 14:20:17.059196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.276 [2024-07-26 14:20:17.059206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.276 [2024-07-26 14:20:17.059213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.276 [2024-07-26 14:20:17.059224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.276 [2024-07-26 14:20:17.059247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.276 [2024-07-26 14:20:17.059373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.276 [2024-07-26 14:20:17.059390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.276 [2024-07-26 14:20:17.059397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.276 [2024-07-26 14:20:17.059409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.276 [2024-07-26 14:20:17.063436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.276 [2024-07-26 14:20:17.063453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.276 [2024-07-26 14:20:17.063461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa31540) 00:26:00.276 [2024-07-26 14:20:17.063473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.276 [2024-07-26 14:20:17.063499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa91840, cid 3, qid 0 00:26:00.276 [2024-07-26 14:20:17.063710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.276 [2024-07-26 14:20:17.063726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.276 [2024-07-26 14:20:17.063734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.276 [2024-07-26 14:20:17.063741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa91840) on tqpair=0xa31540 00:26:00.276 [2024-07-26 14:20:17.063756] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:26:00.276 00:26:00.276 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:00.276 [2024-07-26 
14:20:17.117717] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:26:00.276 [2024-07-26 14:20:17.117815] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588166 ] 00:26:00.276 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.537 [2024-07-26 14:20:17.169825] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:00.537 [2024-07-26 14:20:17.169886] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:00.537 [2024-07-26 14:20:17.169898] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:00.537 [2024-07-26 14:20:17.169918] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:00.537 [2024-07-26 14:20:17.169933] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:00.537 [2024-07-26 14:20:17.173475] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:00.537 [2024-07-26 14:20:17.173521] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d89540 0 00:26:00.537 [2024-07-26 14:20:17.181439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:00.537 [2024-07-26 14:20:17.181466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:00.537 [2024-07-26 14:20:17.181482] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:00.537 [2024-07-26 14:20:17.181489] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:00.538 [2024-07-26 14:20:17.181537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.181549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.181557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.538 [2024-07-26 14:20:17.181575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:00.538 [2024-07-26 14:20:17.181604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.538 [2024-07-26 14:20:17.189445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.538 [2024-07-26 14:20:17.189464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.538 [2024-07-26 14:20:17.189472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.189480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.538 [2024-07-26 14:20:17.189501] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:00.538 [2024-07-26 14:20:17.189514] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:00.538 [2024-07-26 14:20:17.189525] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:00.538 [2024-07-26 14:20:17.189546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.538 
[2024-07-26 14:20:17.189555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.189563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.538 [2024-07-26 14:20:17.189575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.538 [2024-07-26 14:20:17.189602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.538 [2024-07-26 14:20:17.189809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.538 [2024-07-26 14:20:17.189826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.538 [2024-07-26 14:20:17.189834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.189841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.538 [2024-07-26 14:20:17.189855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:00.538 [2024-07-26 14:20:17.189871] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:00.538 [2024-07-26 14:20:17.189885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.189893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.189901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.538 [2024-07-26 14:20:17.189912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.538 [2024-07-26 14:20:17.189937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.538 [2024-07-26 14:20:17.190124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.538 [2024-07-26 14:20:17.190141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.538 [2024-07-26 14:20:17.190148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.538 [2024-07-26 14:20:17.190165] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:00.538 [2024-07-26 14:20:17.190180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:00.538 [2024-07-26 14:20:17.190194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.538 [2024-07-26 14:20:17.190221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.538 [2024-07-26 14:20:17.190245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.538 [2024-07-26 14:20:17.190459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
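The repeated FABRIC PROPERTY GET/SET commands in this stretch are the controller-enable state machine reading and writing the fabrics equivalents of the CC and CSTS registers (read vs, read cap, check en, toggle CC.EN, poll CSTS.RDY). On a connected fabrics controller the same properties can be inspected by hand with nvme-cli — a hedged sketch, with the device name assumed and the offsets taken from the standard register map in the NVMe base specification:

    sudo nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC, Controller Configuration
    sudo nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS, Controller Status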
00:26:00.538 [2024-07-26 14:20:17.190480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.538 [2024-07-26 14:20:17.190489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.538 [2024-07-26 14:20:17.190506] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:00.538 [2024-07-26 14:20:17.190526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.538 [2024-07-26 14:20:17.190555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.538 [2024-07-26 14:20:17.190580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.538 [2024-07-26 14:20:17.190724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.538 [2024-07-26 14:20:17.190741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.538 [2024-07-26 14:20:17.190748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.538 [2024-07-26 14:20:17.190764] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:00.538 [2024-07-26 14:20:17.190774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:00.538 [2024-07-26 14:20:17.190789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:00.538 [2024-07-26 14:20:17.190900] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:00.538 [2024-07-26 14:20:17.190908] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:00.538 [2024-07-26 14:20:17.190923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.190938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.538 [2024-07-26 14:20:17.190950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.538 [2024-07-26 14:20:17.190974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.538 [2024-07-26 14:20:17.191187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.538 [2024-07-26 14:20:17.191204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.538 [2024-07-26 14:20:17.191211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.538 [2024-07-26 
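For readers decoding the "pdu type" numbers in these entries: they are NVMe/TCP PDU opcodes, and the handler named on each line confirms the mapping (opcode names per my reading of the NVMe/TCP transport spec, worth double-checking there):

    pdu type = 1   ICResp        -> nvme_tcp_icresp_handle
    pdu type = 5   CapsuleResp   -> nvme_tcp_capsule_resp_hdr_handle
    pdu type = 7   C2HData       -> nvme_tcp_c2h_data_hdr_handle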
14:20:17.191219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.538 [2024-07-26 14:20:17.191227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:00.538 [2024-07-26 14:20:17.191246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.191256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.191263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.538 [2024-07-26 14:20:17.191275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.538 [2024-07-26 14:20:17.191299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.538 [2024-07-26 14:20:17.191537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.538 [2024-07-26 14:20:17.191557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.538 [2024-07-26 14:20:17.191566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.538 [2024-07-26 14:20:17.191573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.538 [2024-07-26 14:20:17.191581] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:00.538 [2024-07-26 14:20:17.191591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.191606] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:00.539 [2024-07-26 14:20:17.191621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.191638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.191647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.191659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.539 [2024-07-26 14:20:17.191683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.539 [2024-07-26 14:20:17.191933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.539 [2024-07-26 14:20:17.191949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.539 [2024-07-26 14:20:17.191957] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.191964] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=4096, cccid=0 00:26:00.539 [2024-07-26 14:20:17.191973] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de93c0) on tqpair(0x1d89540): expected_datao=0, payload_size=4096 00:26:00.539 [2024-07-26 14:20:17.191981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.191993] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192001] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.539 [2024-07-26 14:20:17.192051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.539 [2024-07-26 14:20:17.192059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.539 [2024-07-26 14:20:17.192078] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:00.539 [2024-07-26 14:20:17.192088] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:00.539 [2024-07-26 14:20:17.192096] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:00.539 [2024-07-26 14:20:17.192104] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:00.539 [2024-07-26 14:20:17.192113] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:00.539 [2024-07-26 14:20:17.192122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.192138] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.192156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.192188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:00.539 [2024-07-26 14:20:17.192224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.539 [2024-07-26 14:20:17.192517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.539 [2024-07-26 14:20:17.192532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.539 [2024-07-26 14:20:17.192540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.539 [2024-07-26 14:20:17.192561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.192586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.539 [2024-07-26 14:20:17.192598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192605] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.192622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.539 [2024-07-26 14:20:17.192633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.192657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.539 [2024-07-26 14:20:17.192667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.192691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.539 [2024-07-26 14:20:17.192701] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.192733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.192747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.192755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.192767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.539 [2024-07-26 14:20:17.192793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de93c0, cid 0, qid 0 00:26:00.539 [2024-07-26 14:20:17.192806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9540, cid 1, qid 0 00:26:00.539 [2024-07-26 14:20:17.192815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de96c0, cid 2, qid 0 00:26:00.539 [2024-07-26 14:20:17.192823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9840, cid 3, qid 0 00:26:00.539 [2024-07-26 14:20:17.192832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de99c0, cid 4, qid 0 00:26:00.539 [2024-07-26 14:20:17.193119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.539 [2024-07-26 14:20:17.193140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.539 [2024-07-26 14:20:17.193148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.193156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de99c0) on tqpair=0x1d89540 00:26:00.539 [2024-07-26 14:20:17.193165] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:00.539 [2024-07-26 
14:20:17.193175] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.193197] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.193210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.193223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.193231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.193238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.193250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:00.539 [2024-07-26 14:20:17.193274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de99c0, cid 4, qid 0 00:26:00.539 [2024-07-26 14:20:17.197443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.539 [2024-07-26 14:20:17.197462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.539 [2024-07-26 14:20:17.197470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.197478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de99c0) on tqpair=0x1d89540 00:26:00.539 [2024-07-26 14:20:17.197555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.197579] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:00.539 [2024-07-26 14:20:17.197597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.197605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d89540) 00:26:00.539 [2024-07-26 14:20:17.197618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.539 [2024-07-26 14:20:17.197643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de99c0, cid 4, qid 0 00:26:00.539 [2024-07-26 14:20:17.197827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.539 [2024-07-26 14:20:17.197843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.539 [2024-07-26 14:20:17.197851] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.197858] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=4096, cccid=4 00:26:00.539 [2024-07-26 14:20:17.197866] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de99c0) on tqpair(0x1d89540): expected_datao=0, payload_size=4096 00:26:00.539 [2024-07-26 14:20:17.197874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.197901] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.197911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.198043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.539 [2024-07-26 14:20:17.198059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.539 [2024-07-26 14:20:17.198067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.539 [2024-07-26 14:20:17.198074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de99c0) on tqpair=0x1d89540 00:26:00.539 [2024-07-26 14:20:17.198103] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:00.539 [2024-07-26 14:20:17.198123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.198144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.198159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.198168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.198179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.198204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de99c0, cid 4, qid 0 00:26:00.540 [2024-07-26 14:20:17.198491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.540 [2024-07-26 14:20:17.198507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.540 [2024-07-26 14:20:17.198514] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.198521] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=4096, cccid=4 00:26:00.540 [2024-07-26 14:20:17.198530] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de99c0) on tqpair(0x1d89540): expected_datao=0, payload_size=4096 00:26:00.540 [2024-07-26 14:20:17.198538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.198557] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.198567] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.198650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.540 [2024-07-26 14:20:17.198666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.540 [2024-07-26 14:20:17.198674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.198681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de99c0) on tqpair=0x1d89540 00:26:00.540 [2024-07-26 14:20:17.198708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.198729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.198745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.198754] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.198766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.198791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de99c0, cid 4, qid 0 00:26:00.540 [2024-07-26 14:20:17.199000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.540 [2024-07-26 14:20:17.199013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.540 [2024-07-26 14:20:17.199020] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.199027] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=4096, cccid=4 00:26:00.540 [2024-07-26 14:20:17.199036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de99c0) on tqpair(0x1d89540): expected_datao=0, payload_size=4096 00:26:00.540 [2024-07-26 14:20:17.199044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.199061] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.199071] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.239636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.540 [2024-07-26 14:20:17.239661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.540 [2024-07-26 14:20:17.239670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.239678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de99c0) on tqpair=0x1d89540 00:26:00.540 [2024-07-26 14:20:17.239694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.239712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.239729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.239745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.239755] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.239765] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.239775] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:00.540 [2024-07-26 14:20:17.239783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:00.540 [2024-07-26 14:20:17.239793] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:00.540 [2024-07-26 14:20:17.239815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
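The *DEBUG*-level entries throughout this run only appear because the CI builds SPDK with logging compiled in and the identify example was launched with -L all. A sketch of reproducing the same trace locally, assuming an SPDK checkout and a target already listening at the address used above:

    ./configure --enable-debug && make
    sudo build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all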
00:26:00.540 [2024-07-26 14:20:17.239825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.239838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.239851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.239859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.239866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.239886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.540 [2024-07-26 14:20:17.239916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de99c0, cid 4, qid 0 00:26:00.540 [2024-07-26 14:20:17.239929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9b40, cid 5, qid 0 00:26:00.540 [2024-07-26 14:20:17.240166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.540 [2024-07-26 14:20:17.240183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.540 [2024-07-26 14:20:17.240190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.240198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de99c0) on tqpair=0x1d89540 00:26:00.540 [2024-07-26 14:20:17.240209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.540 [2024-07-26 14:20:17.240219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.540 [2024-07-26 14:20:17.240226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.240234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9b40) on tqpair=0x1d89540 00:26:00.540 [2024-07-26 14:20:17.240251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.240262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.240274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.240303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9b40, cid 5, qid 0 00:26:00.540 [2024-07-26 14:20:17.244443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.540 [2024-07-26 14:20:17.244461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.540 [2024-07-26 14:20:17.244469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.244476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9b40) on tqpair=0x1d89540 00:26:00.540 [2024-07-26 14:20:17.244495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.244505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.244517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.244543] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9b40, cid 5, qid 0 00:26:00.540 [2024-07-26 14:20:17.244845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.540 [2024-07-26 14:20:17.244858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.540 [2024-07-26 14:20:17.244865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.244872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9b40) on tqpair=0x1d89540 00:26:00.540 [2024-07-26 14:20:17.244890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.244900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.244911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.244934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9b40, cid 5, qid 0 00:26:00.540 [2024-07-26 14:20:17.245132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.540 [2024-07-26 14:20:17.245145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.540 [2024-07-26 14:20:17.245152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.245160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9b40) on tqpair=0x1d89540 00:26:00.540 [2024-07-26 14:20:17.245186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.245198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.245210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.245224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.245232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.245243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.245256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.540 [2024-07-26 14:20:17.245264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d89540) 00:26:00.540 [2024-07-26 14:20:17.245274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.540 [2024-07-26 14:20:17.245288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d89540) 00:26:00.541 [2024-07-26 14:20:17.245307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.541 [2024-07-26 14:20:17.245346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9b40, cid 5, qid 0 00:26:00.541 
[2024-07-26 14:20:17.245359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de99c0, cid 4, qid 0 00:26:00.541 [2024-07-26 14:20:17.245368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9cc0, cid 6, qid 0 00:26:00.541 [2024-07-26 14:20:17.245376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9e40, cid 7, qid 0 00:26:00.541 [2024-07-26 14:20:17.245815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.541 [2024-07-26 14:20:17.245833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.541 [2024-07-26 14:20:17.245841] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=8192, cccid=5 00:26:00.541 [2024-07-26 14:20:17.245857] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de9b40) on tqpair(0x1d89540): expected_datao=0, payload_size=8192 00:26:00.541 [2024-07-26 14:20:17.245865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245876] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245885] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.541 [2024-07-26 14:20:17.245904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.541 [2024-07-26 14:20:17.245911] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245918] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=512, cccid=4 00:26:00.541 [2024-07-26 14:20:17.245926] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de99c0) on tqpair(0x1d89540): expected_datao=0, payload_size=512 00:26:00.541 [2024-07-26 14:20:17.245934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245944] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245952] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.541 [2024-07-26 14:20:17.245971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.541 [2024-07-26 14:20:17.245978] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.245985] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=512, cccid=6 00:26:00.541 [2024-07-26 14:20:17.245993] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de9cc0) on tqpair(0x1d89540): expected_datao=0, payload_size=512 00:26:00.541 [2024-07-26 14:20:17.246001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246012] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246019] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:00.541 [2024-07-26 14:20:17.246038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:00.541 [2024-07-26 14:20:17.246045] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246052] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d89540): datao=0, datal=4096, cccid=7 00:26:00.541 [2024-07-26 14:20:17.246060] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1de9e40) on tqpair(0x1d89540): expected_datao=0, payload_size=4096 00:26:00.541 [2024-07-26 14:20:17.246068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246079] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246087] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.541 [2024-07-26 14:20:17.246112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.541 [2024-07-26 14:20:17.246123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9b40) on tqpair=0x1d89540 00:26:00.541 [2024-07-26 14:20:17.246151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.541 [2024-07-26 14:20:17.246163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.541 [2024-07-26 14:20:17.246170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de99c0) on tqpair=0x1d89540 00:26:00.541 [2024-07-26 14:20:17.246194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.541 [2024-07-26 14:20:17.246206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.541 [2024-07-26 14:20:17.246213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9cc0) on tqpair=0x1d89540 00:26:00.541 [2024-07-26 14:20:17.246232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.541 [2024-07-26 14:20:17.246242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.541 [2024-07-26 14:20:17.246249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.541 [2024-07-26 14:20:17.246256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9e40) on tqpair=0x1d89540 00:26:00.541 ===================================================== 00:26:00.541 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.541 ===================================================== 00:26:00.541 Controller Capabilities/Features 00:26:00.541 ================================ 00:26:00.541 Vendor ID: 8086 00:26:00.541 Subsystem Vendor ID: 8086 00:26:00.541 Serial Number: SPDK00000000000001 00:26:00.541 Model Number: SPDK bdev Controller 00:26:00.541 Firmware Version: 24.09 00:26:00.541 Recommended Arb Burst: 6 00:26:00.541 IEEE OUI Identifier: e4 d2 5c 00:26:00.541 Multi-path I/O 00:26:00.541 May have multiple subsystem ports: Yes 00:26:00.541 May have multiple controllers: Yes 00:26:00.541 Associated with SR-IOV VF: No 00:26:00.541 Max Data Transfer Size: 131072 00:26:00.541 Max Number of Namespaces: 32 00:26:00.541 Max Number of I/O Queues: 127 00:26:00.541 NVMe Specification Version (VS): 1.3 00:26:00.541 NVMe Specification Version (Identify): 1.3 
00:26:00.541 Maximum Queue Entries: 128 00:26:00.541 Contiguous Queues Required: Yes 00:26:00.541 Arbitration Mechanisms Supported 00:26:00.541 Weighted Round Robin: Not Supported 00:26:00.541 Vendor Specific: Not Supported 00:26:00.541 Reset Timeout: 15000 ms 00:26:00.541 Doorbell Stride: 4 bytes 00:26:00.541 NVM Subsystem Reset: Not Supported 00:26:00.541 Command Sets Supported 00:26:00.541 NVM Command Set: Supported 00:26:00.541 Boot Partition: Not Supported 00:26:00.541 Memory Page Size Minimum: 4096 bytes 00:26:00.541 Memory Page Size Maximum: 4096 bytes 00:26:00.541 Persistent Memory Region: Not Supported 00:26:00.541 Optional Asynchronous Events Supported 00:26:00.541 Namespace Attribute Notices: Supported 00:26:00.541 Firmware Activation Notices: Not Supported 00:26:00.541 ANA Change Notices: Not Supported 00:26:00.541 PLE Aggregate Log Change Notices: Not Supported 00:26:00.541 LBA Status Info Alert Notices: Not Supported 00:26:00.541 EGE Aggregate Log Change Notices: Not Supported 00:26:00.541 Normal NVM Subsystem Shutdown event: Not Supported 00:26:00.541 Zone Descriptor Change Notices: Not Supported 00:26:00.541 Discovery Log Change Notices: Not Supported 00:26:00.541 Controller Attributes 00:26:00.541 128-bit Host Identifier: Supported 00:26:00.541 Non-Operational Permissive Mode: Not Supported 00:26:00.541 NVM Sets: Not Supported 00:26:00.541 Read Recovery Levels: Not Supported 00:26:00.541 Endurance Groups: Not Supported 00:26:00.541 Predictable Latency Mode: Not Supported 00:26:00.541 Traffic Based Keep ALive: Not Supported 00:26:00.541 Namespace Granularity: Not Supported 00:26:00.541 SQ Associations: Not Supported 00:26:00.541 UUID List: Not Supported 00:26:00.541 Multi-Domain Subsystem: Not Supported 00:26:00.541 Fixed Capacity Management: Not Supported 00:26:00.541 Variable Capacity Management: Not Supported 00:26:00.541 Delete Endurance Group: Not Supported 00:26:00.541 Delete NVM Set: Not Supported 00:26:00.541 Extended LBA Formats Supported: Not Supported 00:26:00.541 Flexible Data Placement Supported: Not Supported 00:26:00.541 00:26:00.541 Controller Memory Buffer Support 00:26:00.541 ================================ 00:26:00.541 Supported: No 00:26:00.541 00:26:00.541 Persistent Memory Region Support 00:26:00.541 ================================ 00:26:00.541 Supported: No 00:26:00.541 00:26:00.541 Admin Command Set Attributes 00:26:00.541 ============================ 00:26:00.541 Security Send/Receive: Not Supported 00:26:00.541 Format NVM: Not Supported 00:26:00.541 Firmware Activate/Download: Not Supported 00:26:00.541 Namespace Management: Not Supported 00:26:00.542 Device Self-Test: Not Supported 00:26:00.542 Directives: Not Supported 00:26:00.542 NVMe-MI: Not Supported 00:26:00.542 Virtualization Management: Not Supported 00:26:00.542 Doorbell Buffer Config: Not Supported 00:26:00.542 Get LBA Status Capability: Not Supported 00:26:00.542 Command & Feature Lockdown Capability: Not Supported 00:26:00.542 Abort Command Limit: 4 00:26:00.542 Async Event Request Limit: 4 00:26:00.542 Number of Firmware Slots: N/A 00:26:00.542 Firmware Slot 1 Read-Only: N/A 00:26:00.542 Firmware Activation Without Reset: N/A 00:26:00.542 Multiple Update Detection Support: N/A 00:26:00.542 Firmware Update Granularity: No Information Provided 00:26:00.542 Per-Namespace SMART Log: No 00:26:00.542 Asymmetric Namespace Access Log Page: Not Supported 00:26:00.542 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:00.542 Command Effects Log Page: Supported 00:26:00.542 Get Log Page Extended 
Data: Supported 00:26:00.542 Telemetry Log Pages: Not Supported 00:26:00.542 Persistent Event Log Pages: Not Supported 00:26:00.542 Supported Log Pages Log Page: May Support 00:26:00.542 Commands Supported & Effects Log Page: Not Supported 00:26:00.542 Feature Identifiers & Effects Log Page:May Support 00:26:00.542 NVMe-MI Commands & Effects Log Page: May Support 00:26:00.542 Data Area 4 for Telemetry Log: Not Supported 00:26:00.542 Error Log Page Entries Supported: 128 00:26:00.542 Keep Alive: Supported 00:26:00.542 Keep Alive Granularity: 10000 ms 00:26:00.542 00:26:00.542 NVM Command Set Attributes 00:26:00.542 ========================== 00:26:00.542 Submission Queue Entry Size 00:26:00.542 Max: 64 00:26:00.542 Min: 64 00:26:00.542 Completion Queue Entry Size 00:26:00.542 Max: 16 00:26:00.542 Min: 16 00:26:00.542 Number of Namespaces: 32 00:26:00.542 Compare Command: Supported 00:26:00.542 Write Uncorrectable Command: Not Supported 00:26:00.542 Dataset Management Command: Supported 00:26:00.542 Write Zeroes Command: Supported 00:26:00.542 Set Features Save Field: Not Supported 00:26:00.542 Reservations: Supported 00:26:00.542 Timestamp: Not Supported 00:26:00.542 Copy: Supported 00:26:00.542 Volatile Write Cache: Present 00:26:00.542 Atomic Write Unit (Normal): 1 00:26:00.542 Atomic Write Unit (PFail): 1 00:26:00.542 Atomic Compare & Write Unit: 1 00:26:00.542 Fused Compare & Write: Supported 00:26:00.542 Scatter-Gather List 00:26:00.542 SGL Command Set: Supported 00:26:00.542 SGL Keyed: Supported 00:26:00.542 SGL Bit Bucket Descriptor: Not Supported 00:26:00.542 SGL Metadata Pointer: Not Supported 00:26:00.542 Oversized SGL: Not Supported 00:26:00.542 SGL Metadata Address: Not Supported 00:26:00.542 SGL Offset: Supported 00:26:00.542 Transport SGL Data Block: Not Supported 00:26:00.542 Replay Protected Memory Block: Not Supported 00:26:00.542 00:26:00.542 Firmware Slot Information 00:26:00.542 ========================= 00:26:00.542 Active slot: 1 00:26:00.542 Slot 1 Firmware Revision: 24.09 00:26:00.542 00:26:00.542 00:26:00.542 Commands Supported and Effects 00:26:00.542 ============================== 00:26:00.542 Admin Commands 00:26:00.542 -------------- 00:26:00.542 Get Log Page (02h): Supported 00:26:00.542 Identify (06h): Supported 00:26:00.542 Abort (08h): Supported 00:26:00.542 Set Features (09h): Supported 00:26:00.542 Get Features (0Ah): Supported 00:26:00.542 Asynchronous Event Request (0Ch): Supported 00:26:00.542 Keep Alive (18h): Supported 00:26:00.542 I/O Commands 00:26:00.542 ------------ 00:26:00.542 Flush (00h): Supported LBA-Change 00:26:00.542 Write (01h): Supported LBA-Change 00:26:00.542 Read (02h): Supported 00:26:00.542 Compare (05h): Supported 00:26:00.542 Write Zeroes (08h): Supported LBA-Change 00:26:00.542 Dataset Management (09h): Supported LBA-Change 00:26:00.542 Copy (19h): Supported LBA-Change 00:26:00.542 00:26:00.542 Error Log 00:26:00.542 ========= 00:26:00.542 00:26:00.542 Arbitration 00:26:00.542 =========== 00:26:00.542 Arbitration Burst: 1 00:26:00.542 00:26:00.542 Power Management 00:26:00.542 ================ 00:26:00.542 Number of Power States: 1 00:26:00.542 Current Power State: Power State #0 00:26:00.542 Power State #0: 00:26:00.542 Max Power: 0.00 W 00:26:00.542 Non-Operational State: Operational 00:26:00.542 Entry Latency: Not Reported 00:26:00.542 Exit Latency: Not Reported 00:26:00.542 Relative Read Throughput: 0 00:26:00.542 Relative Read Latency: 0 00:26:00.542 Relative Write Throughput: 0 00:26:00.542 Relative Write Latency: 0 
00:26:00.542 Idle Power: Not Reported 00:26:00.542 Active Power: Not Reported 00:26:00.542 Non-Operational Permissive Mode: Not Supported 00:26:00.542 00:26:00.542 Health Information 00:26:00.542 ================== 00:26:00.542 Critical Warnings: 00:26:00.542 Available Spare Space: OK 00:26:00.542 Temperature: OK 00:26:00.542 Device Reliability: OK 00:26:00.542 Read Only: No 00:26:00.542 Volatile Memory Backup: OK 00:26:00.542 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:00.542 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:00.542 Available Spare: 0% 00:26:00.542 Available Spare Threshold: 0% 00:26:00.542 Life Percentage Used: 0% [2024-07-26 14:20:17.246390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.542 [2024-07-26 14:20:17.246404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d89540) 00:26:00.542 [2024-07-26 14:20:17.246416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.542 [2024-07-26 14:20:17.246451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9e40, cid 7, qid 0 00:26:00.542 [2024-07-26 14:20:17.246699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.542 [2024-07-26 14:20:17.246712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.542 [2024-07-26 14:20:17.246719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.542 [2024-07-26 14:20:17.246726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9e40) on tqpair=0x1d89540 00:26:00.542 [2024-07-26 14:20:17.246776] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:00.542 [2024-07-26 14:20:17.246799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de93c0) on tqpair=0x1d89540 00:26:00.542 [2024-07-26 14:20:17.246810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.542 [2024-07-26 14:20:17.246820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9540) on tqpair=0x1d89540 00:26:00.542 [2024-07-26 14:20:17.246829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.542 [2024-07-26 14:20:17.246838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de96c0) on tqpair=0x1d89540 00:26:00.542 [2024-07-26 14:20:17.246846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.543 [2024-07-26 14:20:17.246855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9840) on tqpair=0x1d89540 00:26:00.543 [2024-07-26 14:20:17.246863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.543 [2024-07-26 14:20:17.246878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.246887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.246894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d89540) 00:26:00.543 [2024-07-26 14:20:17.246906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:26:00.543 [2024-07-26 14:20:17.246936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9840, cid 3, qid 0 00:26:00.543 [2024-07-26 14:20:17.247160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.543 [2024-07-26 14:20:17.247173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.543 [2024-07-26 14:20:17.247181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9840) on tqpair=0x1d89540 00:26:00.543 [2024-07-26 14:20:17.247201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d89540) 00:26:00.543 [2024-07-26 14:20:17.247227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.543 [2024-07-26 14:20:17.247256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9840, cid 3, qid 0 00:26:00.543 [2024-07-26 14:20:17.247469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.543 [2024-07-26 14:20:17.247486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.543 [2024-07-26 14:20:17.247493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9840) on tqpair=0x1d89540 00:26:00.543 [2024-07-26 14:20:17.247509] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:00.543 [2024-07-26 14:20:17.247519] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:00.543 [2024-07-26 14:20:17.247537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d89540) 00:26:00.543 [2024-07-26 14:20:17.247565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.543 [2024-07-26 14:20:17.247589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9840, cid 3, qid 0 00:26:00.543 [2024-07-26 14:20:17.247778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.543 [2024-07-26 14:20:17.247790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.543 [2024-07-26 14:20:17.247798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9840) on tqpair=0x1d89540 00:26:00.543 [2024-07-26 14:20:17.247823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.247840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d89540) 00:26:00.543 [2024-07-26 14:20:17.247852] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.543 [2024-07-26 14:20:17.247874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9840, cid 3, qid 0 00:26:00.543 [2024-07-26 14:20:17.248055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.543 [2024-07-26 14:20:17.248071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.543 [2024-07-26 14:20:17.248079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.248086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9840) on tqpair=0x1d89540 00:26:00.543 [2024-07-26 14:20:17.248104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.248115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.248122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d89540) 00:26:00.543 [2024-07-26 14:20:17.248137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.543 [2024-07-26 14:20:17.248161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9840, cid 3, qid 0 00:26:00.543 [2024-07-26 14:20:17.248350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.543 [2024-07-26 14:20:17.248367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.543 [2024-07-26 14:20:17.248374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.248381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9840) on tqpair=0x1d89540 00:26:00.543 [2024-07-26 14:20:17.248400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.248410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.248417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d89540) 00:26:00.543 [2024-07-26 14:20:17.252437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.543 [2024-07-26 14:20:17.252470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1de9840, cid 3, qid 0 00:26:00.543 [2024-07-26 14:20:17.252712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:00.543 [2024-07-26 14:20:17.252728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:00.543 [2024-07-26 14:20:17.252736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:00.543 [2024-07-26 14:20:17.252743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1de9840) on tqpair=0x1d89540 00:26:00.543 [2024-07-26 14:20:17.252758] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:26:00.543 00:26:00.543 Data Units Read: 0 00:26:00.543 Data Units Written: 0 00:26:00.543 Host Read Commands: 0 00:26:00.543 Host Write Commands: 0 00:26:00.543 Controller Busy Time: 0 minutes 00:26:00.543 Power Cycles: 0 00:26:00.543 Power On Hours: 0 hours 00:26:00.543 Unsafe Shutdowns: 0 00:26:00.543 Unrecoverable Media Errors: 0 00:26:00.543 Lifetime Error Log Entries: 0 00:26:00.543 Warning Temperature Time: 0 minutes 00:26:00.543 
Critical Temperature Time: 0 minutes 00:26:00.543 00:26:00.543 Number of Queues 00:26:00.543 ================ 00:26:00.543 Number of I/O Submission Queues: 127 00:26:00.543 Number of I/O Completion Queues: 127 00:26:00.543 00:26:00.543 Active Namespaces 00:26:00.543 ================= 00:26:00.543 Namespace ID:1 00:26:00.543 Error Recovery Timeout: Unlimited 00:26:00.543 Command Set Identifier: NVM (00h) 00:26:00.543 Deallocate: Supported 00:26:00.543 Deallocated/Unwritten Error: Not Supported 00:26:00.543 Deallocated Read Value: Unknown 00:26:00.543 Deallocate in Write Zeroes: Not Supported 00:26:00.543 Deallocated Guard Field: 0xFFFF 00:26:00.543 Flush: Supported 00:26:00.543 Reservation: Supported 00:26:00.543 Namespace Sharing Capabilities: Multiple Controllers 00:26:00.543 Size (in LBAs): 131072 (0GiB) 00:26:00.543 Capacity (in LBAs): 131072 (0GiB) 00:26:00.543 Utilization (in LBAs): 131072 (0GiB) 00:26:00.543 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:00.543 EUI64: ABCDEF0123456789 00:26:00.543 UUID: 52935f02-0c7e-465d-88fa-c30c9c808001 00:26:00.543 Thin Provisioning: Not Supported 00:26:00.543 Per-NS Atomic Units: Yes 00:26:00.543 Atomic Boundary Size (Normal): 0 00:26:00.543 Atomic Boundary Size (PFail): 0 00:26:00.543 Atomic Boundary Offset: 0 00:26:00.543 Maximum Single Source Range Length: 65535 00:26:00.543 Maximum Copy Length: 65535 00:26:00.543 Maximum Source Range Count: 1 00:26:00.543 NGUID/EUI64 Never Reused: No 00:26:00.543 Namespace Write Protected: No 00:26:00.543 Number of LBA Formats: 1 00:26:00.543 Current LBA Format: LBA Format #00 00:26:00.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:00.543 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:00.543 rmmod nvme_tcp 00:26:00.543 rmmod nvme_fabrics 00:26:00.543 rmmod nvme_keyring 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2588010 ']' 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # 
killprocess 2588010 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2588010 ']' 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2588010 00:26:00.543 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:26:00.544 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:00.544 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2588010 00:26:00.544 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:00.544 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:00.544 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2588010' 00:26:00.544 killing process with pid 2588010 00:26:00.544 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2588010 00:26:00.544 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2588010 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.111 14:20:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:03.055 00:26:03.055 real 0m6.260s 00:26:03.055 user 0m5.159s 00:26:03.055 sys 0m2.533s 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:03.055 ************************************ 00:26:03.055 END TEST nvmf_identify 00:26:03.055 ************************************ 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.055 ************************************ 00:26:03.055 START TEST nvmf_perf 00:26:03.055 ************************************ 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:03.055 * Looking for test storage... 
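The nvmf_identify teardown traced above walks through the harness's killprocess helper: confirm the pid argument is set, probe liveness with kill -0, resolve the process's command name with ps so a sudo wrapper is never signalled, then kill and wait to reap it. A minimal bash sketch of that flow, using this run's pid 2588010 as the example argument; the guard returns are illustrative, not a verbatim copy of autotest_common.sh:

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1               # the '[' -z 2588010 ']' guard above
        kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        [ "$process_name" = sudo ] && return 1  # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap and propagate the exit status
    }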
00:26:03.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
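nvmftestinit's first real action, visible at nvmf/common.sh@446 above, is to install nvmftestfini as a signal and exit trap; that trap is what guarantees the module unloads and namespace teardown seen later in this log even if a test dies mid-run. A hedged sketch of the pattern follows; setup_tcp_target and run_test_body are placeholders for the per-test work, not helpers from the real script:

    set -e
    trap 'nvmftestfini' SIGINT SIGTERM EXIT   # cleanup fires on errors and Ctrl-C too
    setup_tcp_target                          # placeholder: transport, subsystem, listener
    run_test_body                             # placeholder: the actual perf/identify work
    trap - SIGINT SIGTERM EXIT                # success path: drop the trap ...
    nvmftestfini                              # ... and clean up explicitly

The identify test earlier in this log ends exactly this way: host/identify.sh@54 clears the trap and @56 calls nvmftestfini by hand.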
00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.055 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.313 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:03.313 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:03.313 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:03.314 14:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.845 
14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:05.845 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:05.845 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:26:05.845 Found net devices under 0000:84:00.0: cvl_0_0 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:05.845 Found net devices under 0000:84:00.1: cvl_0_1 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:05.845 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:05.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:26:05.846 00:26:05.846 --- 10.0.0.2 ping statistics --- 00:26:05.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.846 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:05.846 00:26:05.846 --- 10.0.0.1 ping statistics --- 00:26:05.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.846 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:05.846 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2590236 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2590236 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2590236 ']' 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
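Pulled together from the nvmf_tcp_init trace above: one e810 port (cvl_0_0) is moved into a network namespace to act as the NVMe/TCP target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1. The interface names are specific to this machine; otherwise the sketch mirrors the logged commands:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                       # initiator -> target (0.137 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator (0.101 ms above)

This is also why nvmf_tgt is launched as ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt just above: its 10.0.0.2:4420 listener only exists inside that namespace.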
00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.104 14:20:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:06.104 [2024-07-26 14:20:22.810184] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:26:06.104 [2024-07-26 14:20:22.810297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.104 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.104 [2024-07-26 14:20:22.896258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.363 [2024-07-26 14:20:23.023709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.363 [2024-07-26 14:20:23.023767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.363 [2024-07-26 14:20:23.023785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.363 [2024-07-26 14:20:23.023799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.363 [2024-07-26 14:20:23.023811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.363 [2024-07-26 14:20:23.023911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.363 [2024-07-26 14:20:23.023968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.363 [2024-07-26 14:20:23.024020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.363 [2024-07-26 14:20:23.024023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:06.363 14:20:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:09.642 14:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:09.642 14:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:09.900 14:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:26:09.900 14:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:10.464 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:10.464 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:82:00.0 ']' 00:26:10.464 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:10.464 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:10.464 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:10.721 [2024-07-26 14:20:27.457774] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.721 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:10.977 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:10.977 14:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:11.233 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:11.233 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:11.797 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.054 [2024-07-26 14:20:28.819876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.054 14:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:12.618 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:26:12.618 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:26:12.618 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:12.618 14:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:26:14.003 Initializing NVMe Controllers 00:26:14.003 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:26:14.003 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:26:14.003 Initialization complete. Launching workers. 
00:26:14.003 ======================================================== 00:26:14.003 Latency(us) 00:26:14.003 Device Information : IOPS MiB/s Average min max 00:26:14.003 PCIE (0000:82:00.0) NSID 1 from core 0: 75361.67 294.38 424.04 44.90 5283.92 00:26:14.003 ======================================================== 00:26:14.003 Total : 75361.67 294.38 424.04 44.90 5283.92 00:26:14.003 00:26:14.003 14:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:14.003 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.933 Initializing NVMe Controllers 00:26:14.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:14.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:14.933 Initialization complete. Launching workers. 00:26:14.933 ======================================================== 00:26:14.933 Latency(us) 00:26:14.933 Device Information : IOPS MiB/s Average min max 00:26:14.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.00 0.39 10024.33 205.48 45290.34 00:26:14.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16466.74 7576.24 47886.99 00:26:14.933 ======================================================== 00:26:14.933 Total : 162.00 0.63 12450.18 205.48 47886.99 00:26:14.933 00:26:15.190 14:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:15.190 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.561 Initializing NVMe Controllers 00:26:16.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:16.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:16.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:16.561 Initialization complete. Launching workers. 
00:26:16.561 ======================================================== 00:26:16.561 Latency(us) 00:26:16.561 Device Information : IOPS MiB/s Average min max 00:26:16.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7614.56 29.74 4219.37 627.03 8967.20 00:26:16.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3854.78 15.06 8336.48 5017.92 15681.80 00:26:16.561 ======================================================== 00:26:16.561 Total : 11469.33 44.80 5603.11 627.03 15681.80 00:26:16.561 00:26:16.561 14:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:16.561 14:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:16.561 14:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:16.561 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.089 Initializing NVMe Controllers 00:26:19.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:19.089 Controller IO queue size 128, less than required. 00:26:19.089 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:19.089 Controller IO queue size 128, less than required. 00:26:19.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:19.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:19.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:19.090 Initialization complete. Launching workers. 00:26:19.090 ======================================================== 00:26:19.090 Latency(us) 00:26:19.090 Device Information : IOPS MiB/s Average min max 00:26:19.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1042.93 260.73 125206.13 73415.55 166818.94 00:26:19.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.96 144.24 232317.47 102265.48 318693.73 00:26:19.090 ======================================================== 00:26:19.090 Total : 1619.89 404.97 163356.28 73415.55 318693.73 00:26:19.090 00:26:19.090 14:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:19.090 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.090 No valid NVMe controllers or AIO or URING devices found 00:26:19.090 Initializing NVMe Controllers 00:26:19.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:19.090 Controller IO queue size 128, less than required. 00:26:19.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:19.090 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:19.090 Controller IO queue size 128, less than required. 00:26:19.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:19.090 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:19.090 WARNING: Some requested NVMe devices were skipped 00:26:19.090 14:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:19.090 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.618 Initializing NVMe Controllers 00:26:21.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:21.618 Controller IO queue size 128, less than required. 00:26:21.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:21.618 Controller IO queue size 128, less than required. 00:26:21.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:21.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:21.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:21.618 Initialization complete. Launching workers. 00:26:21.618 00:26:21.618 ==================== 00:26:21.618 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:21.618 TCP transport: 00:26:21.618 polls: 14517 00:26:21.618 idle_polls: 5646 00:26:21.618 sock_completions: 8871 00:26:21.618 nvme_completions: 4303 00:26:21.618 submitted_requests: 6466 00:26:21.618 queued_requests: 1 00:26:21.618 00:26:21.618 ==================== 00:26:21.618 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:21.618 TCP transport: 00:26:21.618 polls: 12860 00:26:21.618 idle_polls: 3218 00:26:21.618 sock_completions: 9642 00:26:21.618 nvme_completions: 4381 00:26:21.618 submitted_requests: 6524 00:26:21.618 queued_requests: 1 00:26:21.618 ======================================================== 00:26:21.618 Latency(us) 00:26:21.618 Device Information : IOPS MiB/s Average min max 00:26:21.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1075.47 268.87 122141.21 63970.21 166617.81 00:26:21.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1094.97 273.74 119077.61 56003.48 168336.94 00:26:21.618 ======================================================== 00:26:21.618 Total : 2170.43 542.61 120595.65 56003.48 168336.94 00:26:21.618 00:26:21.618 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:21.618 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:22.216 rmmod nvme_tcp 00:26:22.216 rmmod nvme_fabrics 00:26:22.216 rmmod nvme_keyring 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2590236 ']' 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2590236 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2590236 ']' 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2590236 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2590236 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2590236' 00:26:22.216 killing process with pid 2590236 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2590236 00:26:22.216 14:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2590236 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.117 14:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:26.021 00:26:26.021 real 0m22.833s 00:26:26.021 user 1m10.246s 00:26:26.021 sys 0m6.047s 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:26.021 ************************************ 00:26:26.021 END TEST nvmf_perf 00:26:26.021 ************************************ 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:26:26.021 ************************************ 00:26:26.021 START TEST nvmf_fio_host 00:26:26.021 ************************************ 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:26.021 * Looking for test storage... 00:26:26.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.021 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.022 14:20:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:29.313 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:29.313 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:29.313 14:20:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:29.313 Found net devices under 0000:84:00.0: cvl_0_0 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.313 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:29.314 Found net devices under 0000:84:00.1: cvl_0_1 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:29.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:26:29.314 00:26:29.314 --- 10.0.0.2 ping statistics --- 00:26:29.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.314 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:26:29.314 00:26:29.314 --- 10.0.0.1 ping statistics --- 00:26:29.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.314 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2594343 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2594343 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2594343 ']' 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:29.314 14:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.314 [2024-07-26 14:20:45.776074] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
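For context: waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, just polls until the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket. A minimal sketch of that loop, assuming the pid and paths shown in this log and using the standard rpc_get_methods RPC as the liveness probe (a paraphrase with a hypothetical function name, not the verbatim autotest_common.sh source):

    waitforlisten_sketch() {   # hypothetical name; paraphrases waitforlisten, $1 = target pid
        local i=0
        while (( i++ < 100 )); do
            kill -0 "$1" 2>/dev/null || return 1   # target died before it could listen
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }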
00:26:29.314 [2024-07-26 14:20:45.776160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.314 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.314 [2024-07-26 14:20:45.847721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.314 [2024-07-26 14:20:45.970249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.314 [2024-07-26 14:20:45.970311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.314 [2024-07-26 14:20:45.970336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.314 [2024-07-26 14:20:45.970349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.314 [2024-07-26 14:20:45.970361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.314 [2024-07-26 14:20:45.970512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.314 [2024-07-26 14:20:45.970552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.314 [2024-07-26 14:20:45.970605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.314 [2024-07-26 14:20:45.970608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.247 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:30.247 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:26:30.247 14:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:30.505 [2024-07-26 14:20:47.135444] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.505 14:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:30.505 14:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.505 14:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.505 14:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:31.070 Malloc1 00:26:31.070 14:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:31.328 14:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:31.892 14:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.150 [2024-07-26 14:20:48.839119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.150 14:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:32.408 
14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:32.408 14:20:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:32.666 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:32.666 fio-3.35 00:26:32.666 Starting 
1 thread 00:26:32.666 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.194 00:26:35.194 test: (groupid=0, jobs=1): err= 0: pid=2594845: Fri Jul 26 14:20:51 2024 00:26:35.194 read: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(63.8MiB/2007msec) 00:26:35.194 slat (usec): min=2, max=157, avg= 3.04, stdev= 2.19 00:26:35.194 clat (usec): min=2699, max=14729, avg=8690.07, stdev=642.34 00:26:35.194 lat (usec): min=2723, max=14733, avg=8693.11, stdev=642.21 00:26:35.194 clat percentiles (usec): 00:26:35.194 | 1.00th=[ 7242], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8225], 00:26:35.194 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:26:35.194 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:26:35.194 | 99.00th=[10028], 99.50th=[10290], 99.90th=[12911], 99.95th=[14484], 00:26:35.194 | 99.99th=[14615] 00:26:35.194 bw ( KiB/s): min=31688, max=33176, per=99.91%, avg=32544.00, stdev=647.46, samples=4 00:26:35.194 iops : min= 7922, max= 8294, avg=8136.00, stdev=161.86, samples=4 00:26:35.194 write: IOPS=8140, BW=31.8MiB/s (33.3MB/s)(63.8MiB/2007msec); 0 zone resets 00:26:35.194 slat (usec): min=2, max=134, avg= 3.05, stdev= 1.79 00:26:35.194 clat (usec): min=1382, max=13995, avg=6984.36, stdev=572.73 00:26:35.194 lat (usec): min=1391, max=13998, avg=6987.41, stdev=572.65 00:26:35.194 clat percentiles (usec): 00:26:35.194 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6587], 00:26:35.194 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:26:35.194 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7767], 00:26:35.194 | 99.00th=[ 8160], 99.50th=[ 8356], 99.90th=[11863], 99.95th=[12911], 00:26:35.194 | 99.99th=[13960] 00:26:35.194 bw ( KiB/s): min=32368, max=32784, per=100.00%, avg=32564.00, stdev=173.50, samples=4 00:26:35.194 iops : min= 8092, max= 8196, avg=8141.00, stdev=43.37, samples=4 00:26:35.194 lat (msec) : 2=0.01%, 4=0.11%, 10=99.17%, 20=0.71% 00:26:35.194 cpu : usr=67.10%, sys=29.71%, ctx=59, majf=0, minf=39 00:26:35.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:35.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:35.194 issued rwts: total=16344,16337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:35.194 00:26:35.194 Run status group 0 (all jobs): 00:26:35.194 READ: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=63.8MiB (66.9MB), run=2007-2007msec 00:26:35.194 WRITE: bw=31.8MiB/s (33.3MB/s), 31.8MiB/s-31.8MiB/s (33.3MB/s-33.3MB/s), io=63.8MiB (66.9MB), run=2007-2007msec 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.194 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:35.195 14:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:35.195 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:35.195 fio-3.35 00:26:35.195 Starting 1 thread 00:26:35.195 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.739 00:26:37.740 test: (groupid=0, jobs=1): err= 0: pid=2595181: Fri Jul 26 14:20:54 2024 00:26:37.740 read: IOPS=5422, BW=84.7MiB/s (88.8MB/s)(170MiB/2008msec) 00:26:37.740 slat (usec): min=4, max=145, avg= 6.97, stdev= 3.74 00:26:37.740 clat (usec): min=3020, max=54472, avg=13872.44, stdev=6209.37 00:26:37.740 lat (usec): min=3025, max=54477, avg=13879.41, stdev=6210.90 00:26:37.740 clat percentiles (usec): 00:26:37.740 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 9241], 00:26:37.740 | 30.00th=[10159], 40.00th=[11338], 50.00th=[12256], 60.00th=[13435], 00:26:37.740 | 70.00th=[14746], 80.00th=[17957], 90.00th=[21890], 95.00th=[26608], 00:26:37.740 | 99.00th=[30540], 99.50th=[49021], 99.90th=[53216], 99.95th=[54264], 00:26:37.740 | 99.99th=[54264] 00:26:37.740 bw ( KiB/s): min=35680, max=67200, per=53.14%, avg=46104.00, stdev=14305.29, samples=4 
00:26:37.740 iops : min= 2230, max= 4200, avg=2881.50, stdev=893.96, samples=4 00:26:37.740 write: IOPS=3256, BW=50.9MiB/s (53.4MB/s)(94.6MiB/1860msec); 0 zone resets 00:26:37.740 slat (usec): min=40, max=376, avg=59.17, stdev=22.11 00:26:37.740 clat (usec): min=5265, max=56792, avg=16580.01, stdev=6315.85 00:26:37.740 lat (usec): min=5318, max=56840, avg=16639.17, stdev=6327.54 00:26:37.740 clat percentiles (usec): 00:26:37.740 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[11076], 20.00th=[11994], 00:26:37.740 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14353], 60.00th=[15533], 00:26:37.740 | 70.00th=[18220], 80.00th=[21103], 90.00th=[24773], 95.00th=[28181], 00:26:37.740 | 99.00th=[32637], 99.50th=[52167], 99.90th=[54264], 99.95th=[54789], 00:26:37.740 | 99.99th=[56886] 00:26:37.740 bw ( KiB/s): min=35904, max=70368, per=92.09%, avg=47984.00, stdev=15281.23, samples=4 00:26:37.740 iops : min= 2244, max= 4398, avg=2999.00, stdev=955.08, samples=4 00:26:37.740 lat (msec) : 4=0.07%, 10=18.56%, 20=63.88%, 50=17.02%, 100=0.47% 00:26:37.740 cpu : usr=81.66%, sys=16.79%, ctx=12, majf=0, minf=65 00:26:37.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:37.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:37.740 issued rwts: total=10888,6057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:37.740 00:26:37.740 Run status group 0 (all jobs): 00:26:37.740 READ: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=170MiB (178MB), run=2008-2008msec 00:26:37.740 WRITE: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=94.6MiB (99.2MB), run=1860-1860msec 00:26:37.740 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:37.997 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:37.997 rmmod nvme_tcp 00:26:37.997 rmmod nvme_fabrics 00:26:38.255 rmmod nvme_keyring 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2594343 ']' 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # 
killprocess 2594343 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2594343 ']' 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2594343 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2594343 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2594343' 00:26:38.255 killing process with pid 2594343 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2594343 00:26:38.255 14:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2594343 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.513 14:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:41.049 00:26:41.049 real 0m14.615s 00:26:41.049 user 0m44.309s 00:26:41.049 sys 0m4.605s 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.049 ************************************ 00:26:41.049 END TEST nvmf_fio_host 00:26:41.049 ************************************ 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.049 ************************************ 00:26:41.049 START TEST nvmf_failover 00:26:41.049 ************************************ 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:41.049 * Looking for test storage... 
00:26:41.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.049 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
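nvmftestinit, invoked above and traced in full below, reduces to: discover the two ice-bound E810 ports, move one into a private network namespace as the target side, address the pair as 10.0.0.2 (target) and 10.0.0.1 (initiator), open TCP/4420 in iptables, and ping both directions to prove connectivity. A condensed sketch assembled from the commands this log traces, assuming the cvl_0_0/cvl_0_1 device names this rig reports:

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator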
00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.050 14:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.588 14:21:00 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:43.588 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:43.588 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:43.588 Found net devices under 0000:84:00.0: cvl_0_0 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.588 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:43.588 Found net devices under 0000:84:00.1: cvl_0_1 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:43.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:26:43.589 00:26:43.589 --- 10.0.0.2 ping statistics --- 00:26:43.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.589 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:43.589 00:26:43.589 --- 10.0.0.1 ping statistics --- 00:26:43.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.589 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2597553 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:43.589 14:21:00 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2597553 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2597553 ']' 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.589 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:43.848 [2024-07-26 14:21:00.482415] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:26:43.848 [2024-07-26 14:21:00.482542] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.848 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.848 [2024-07-26 14:21:00.572187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.848 [2024-07-26 14:21:00.716696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.848 [2024-07-26 14:21:00.716774] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.848 [2024-07-26 14:21:00.716794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.848 [2024-07-26 14:21:00.716811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.848 [2024-07-26 14:21:00.716826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
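For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229 onward) reduces to roughly the following shell outline. cvl_0_0 and cvl_0_1 are the two port netdevs found under 0000:84:00.0 and 0000:84:00.1 in the trace; binary paths are shortened, and this is a sketch of what common.sh does rather than a verbatim excerpt:

    # Move the target-side port into its own network namespace so the initiator
    # and target can talk over two physical ports of the same host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check
    # The target application then runs inside the namespace (core mask 0xE puts
    # its reactors on cores 1-3); the test waits for the RPC socket
    # /var/tmp/spdk.sock before configuring anything.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &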
00:26:43.848 [2024-07-26 14:21:00.716928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.848 [2024-07-26 14:21:00.716992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.848 [2024-07-26 14:21:00.716997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.106 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.106 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:26:44.106 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:44.106 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:44.106 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:44.106 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.106 14:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:44.364 [2024-07-26 14:21:01.210627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.622 14:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:44.880 Malloc0 00:26:44.880 14:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.138 14:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:45.704 14:21:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.704 [2024-07-26 14:21:02.568963] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.961 14:21:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:46.218 [2024-07-26 14:21:02.938163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:46.218 14:21:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:46.783 [2024-07-26 14:21:03.528438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2598042 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2598042 /var/tmp/bdevperf.sock 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2598042 ']' 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:46.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:46.783 14:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:47.715 14:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:47.715 14:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:26:47.715 14:21:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:48.280 NVMe0n1 00:26:48.280 14:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:48.845 00:26:48.845 14:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2598306 00:26:48.845 14:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:48.845 14:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:49.778 14:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.344 [2024-07-26 14:21:07.226004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1073420 is same with the state(5) to be set 00:26:50.344 [2024-07-26 14:21:07.226103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1073420 is same with the state(5) to be set 00:26:50.344 [2024-07-26 14:21:07.226135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1073420 is same with the state(5) to be set 00:26:50.344 [2024-07-26 14:21:07.226150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1073420 is same with the state(5) to be set 00:26:50.344 [2024-07-26 14:21:07.226163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1073420 is same with the state(5) to be set 00:26:50.344 [2024-07-26 14:21:07.226176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1073420 is same with the state(5) to be set 00:26:50.344 [2024-07-26 14:21:07.226189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1073420 is same with the state(5) to be set 00:26:50.344
[the tcp.c:1653 recv-state message above repeats for tqpair=0x1073420, 14:21:07.226202 through 14:21:07.227834]
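Collected in one place, the target and initiator configuration performed above by failover.sh@20-36 amounts to the following. Here rpc.py stands for the full scripts/rpc.py path, and the port loop is an editor's condensation of the three add_listener calls in the trace:

    # Target side (RPC socket /var/tmp/spdk.sock inside the namespace):
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                 # three listeners = three paths
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done
    # Initiator side: bdevperf (-q 128 -o 4096 -w verify -t 15) attaches the same
    # controller through two of those paths, so removing one listener leaves the
    # NVMe bdev module an alternate path to fail over to:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1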
00:26:50.603 14:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:53.880 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b
NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:53.880 00:26:53.880 14:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:54.446 14:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:57.727 14:21:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.727 [2024-07-26 14:21:14.393726] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.727 14:21:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:58.661 14:21:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:58.919 [2024-07-26 14:21:15.789728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122dfc0 is same with the state(5) to be set 00:26:58.919
[message above repeated six more times for tqpair=0x122dfc0, 14:21:15.789801 through 14:21:15.789872]
00:26:59.177 14:21:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2598306 00:27:04.482 0 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2598042 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2598042 ']' 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2598042 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2598042 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2598042' 00:27:04.483 killing process with pid 2598042 00:27:04.483 14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2598042
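The failover choreography that failover.sh@43-59 just drove, with bdevperf's verify workload running throughout. The comments give the apparent intent of each step, and the sleeps leave the initiator time to detect the dead path and reconnect:

    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # drop the active path
    sleep 3                                                          # I/O continues on 4421
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # add a third path
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # drop the second path
    sleep 3                                                          # I/O continues on 4422
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # restore the original
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422  # force fail-back to 4420
    wait "$run_test_pid"   # perform_tests prints 0 above, i.e. no I/O errors across the failovers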
14:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2598042 00:27:04.483 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:04.483 [2024-07-26 14:21:03.601212] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:27:04.483 [2024-07-26 14:21:03.601295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598042 ] 00:27:04.483 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.483 [2024-07-26 14:21:03.665236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.483 [2024-07-26 14:21:03.787244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.483 Running I/O for 15 seconds... 00:27:04.483 [2024-07-26 14:21:07.228633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483 [2024-07-26 14:21:07.228718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483 [2024-07-26 14:21:07.228755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483 [2024-07-26 14:21:07.228788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483 [2024-07-26 14:21:07.228821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483 [2024-07-26 14:21:07.228854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483 [2024-07-26 14:21:07.228886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483 [2024-07-26 14:21:07.228919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.483 [2024-07-26 14:21:07.228935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.483
[nvme_qpair.c READ/WRITE command and ABORTED - SQ DELETION completion pairs repeat for every queued I/O, lba 91040 through lba 91760 and onward]
sqid:1 cid:56 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.231916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.231936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.231951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.231968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.231982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.231998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91848 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.485 [2024-07-26 14:21:07.232272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.485 [2024-07-26 14:21:07.232286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.486 [2024-07-26 14:21:07.232321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91880 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91888 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91896 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91904 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91912 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91920 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91928 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91936 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91944 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91952 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 
14:21:07.232915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91960 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.232956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.232967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.232980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91968 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.232994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.233008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.233020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.233032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91976 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.233046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.233060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.233072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.233084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91984 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.233098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.233113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.233125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.233137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91992 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.233151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.233165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.486 [2024-07-26 14:21:07.233177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.486 [2024-07-26 14:21:07.233193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92000 len:8 PRP1 0x0 PRP2 0x0 00:27:04.486 [2024-07-26 14:21:07.233208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.486 [2024-07-26 14:21:07.233279] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1707ba0 was disconnected and freed. reset controller. 
00:27:04.486 [2024-07-26 14:21:07.233301] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:04.486 [2024-07-26 14:21:07.233341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:04.486 [2024-07-26 14:21:07.233361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:04.486 [2024-07-26 14:21:07.233379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:04.486 [2024-07-26 14:21:07.233393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:04.486 [2024-07-26 14:21:07.233409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:04.486 [2024-07-26 14:21:07.233425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:04.486 [2024-07-26 14:21:07.233448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:04.486 [2024-07-26 14:21:07.233463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:04.486 [2024-07-26 14:21:07.233477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.486 [2024-07-26 14:21:07.233553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e1790 (9): Bad file descriptor
00:27:04.486 [2024-07-26 14:21:07.237142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.486 [2024-07-26 14:21:07.271859] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:04.486 [2024-07-26 14:21:11.056338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:04.486 [2024-07-26 14:21:11.056411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ABORTED - SQ DELETION (00/08) completions repeated for the remaining queued WRITE (lba:81192-81824) and READ (lba:80808-81120) commands on qid:1 ...]
00:27:04.490 [2024-07-26 14:21:11.060480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.490 [2024-07-26 14:21:11.060495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.490 [2024-07-26 14:21:11.060511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.490 [2024-07-26 14:21:11.060526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.490 [2024-07-26 14:21:11.060543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.490 [2024-07-26 14:21:11.060557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.490 [2024-07-26 14:21:11.060574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.490 [2024-07-26 14:21:11.060589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.490 [2024-07-26 14:21:11.060605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.490 [2024-07-26 14:21:11.060620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.490 [2024-07-26 14:21:11.060636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.490 [2024-07-26 14:21:11.060651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.490 [2024-07-26 14:21:11.060689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.490 [2024-07-26 14:21:11.060707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.490 [2024-07-26 14:21:11.060721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:27:04.490 [2024-07-26 14:21:11.060735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.490 [2024-07-26 14:21:11.060808] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1707d80 was disconnected and freed. reset controller. 
00:27:04.490 [2024-07-26 14:21:11.060829] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:04.490 [2024-07-26 14:21:11.060870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:04.490 [2024-07-26 14:21:11.060891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:04.490 [... identical pairs for the admin-queue ASYNC EVENT REQUESTs cid:2, cid:1 and cid:0 ...]
00:27:04.490 [2024-07-26 14:21:11.061013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.490 [2024-07-26 14:21:11.064614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.490 [2024-07-26 14:21:11.064658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e1790 (9): Bad file descriptor
00:27:04.490 [2024-07-26 14:21:11.234447] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
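The block above is one complete failover cycle: every I/O still queued on the dying qpair is completed as ABORTED - SQ DELETION, bdev_nvme_failover_trid rotates the controller to the next registered path (10.0.0.2:4421 to 10.0.0.2:4422), and the reset reconnects there. As a rough illustration of the kind of setup that produces such a rotation, here is a minimal sketch using the standard SPDK rpc.py helpers; the NQN, address, ports, and bdevperf RPC socket mirror the log, while the Malloc0 backing bdev, the bdev name NVMe0, and the exact flag set are assumptions rather than the literal failover.sh code.

#!/usr/bin/env bash
# Sketch: a target with three TCP listeners and an initiator that knows all
# three paths, so killing the active listener forces a trid failover.
rpc_tgt="scripts/rpc.py"                             # target-app RPC
rpc_ini="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf RPC (socket from the log)

$rpc_tgt nvmf_create_transport -t tcp
$rpc_tgt bdev_malloc_create 64 512 -b Malloc0        # Malloc0 is an assumed backing bdev
$rpc_tgt nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rpc_tgt nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                       # the three ports seen in the failover log
    $rpc_tgt nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: attach the same controller name once per path; the extra
# trids become failover targets (newer SPDK may want an explicit '-x failover').
for port in 4420 4421 4422; do
    $rpc_ini bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done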
00:27:04.490 [2024-07-26 14:21:15.790247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:04.490 [2024-07-26 14:21:15.790296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:04.490 [... the same command/completion pair repeats for the remaining queued WRITEs (lba 29576-30264) and READs (lba 29256-29560), each completed as ABORTED - SQ DELETION ...]
00:27:04.493 [2024-07-26 14:21:15.794481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:04.493 [2024-07-26 14:21:15.794498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:04.493 [2024-07-26 14:21:15.794510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30272 len:8 PRP1 0x0 PRP2 0x0
00:27:04.493 [2024-07-26 14:21:15.794524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:04.493 [2024-07-26 14:21:15.794598] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1707d80 was disconnected and freed. reset controller.
00:27:04.493 [2024-07-26 14:21:15.794619] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:04.493 [2024-07-26 14:21:15.794659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.493 [2024-07-26 14:21:15.794679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.493 [2024-07-26 14:21:15.794695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.493 [2024-07-26 14:21:15.794710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.493 [2024-07-26 14:21:15.794725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.493 [2024-07-26 14:21:15.794739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.494 [2024-07-26 14:21:15.794755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.494 [2024-07-26 14:21:15.794769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.494 [2024-07-26 14:21:15.794783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.494 [2024-07-26 14:21:15.798395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.494 [2024-07-26 14:21:15.798447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e1790 (9): Bad file descriptor 00:27:04.494 [2024-07-26 14:21:15.879117] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:04.494
00:27:04.494 Latency(us)
00:27:04.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:04.494 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:04.494 Verification LBA range: start 0x0 length 0x4000
00:27:04.494 NVMe0n1 : 15.00 7865.28 30.72 666.84 0.00 14971.99 892.02 17864.63
00:27:04.494 ===================================================================================================================
00:27:04.494 Total : 7865.28 30.72 666.84 0.00 14971.99 892.02 17864.63
00:27:04.494 Received shutdown signal, test time was about 15.000000 seconds
00:27:04.494
00:27:04.494 Latency(us)
00:27:04.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:04.494 ===================================================================================================================
00:27:04.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2600535
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2600535 /var/tmp/bdevperf.sock
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2600535 ']'
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
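The phase-1 gate traced above reduces to a line count over the captured log: the driver must have printed "Resetting controller successful" once per portal exercised. A minimal sketch of that check, assuming this job's try.txt path (adjust for your own tree):

log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
# One successful reset per listener exercised: 4420 -> 4421 -> 4422.
(( count == 3 )) || exit 1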
00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.494 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:04.753 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.753 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:04.753 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:05.011 [2024-07-26 14:21:21.872833] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:05.011 14:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:05.576 [2024-07-26 14:21:22.213871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:05.576 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:06.143 NVMe0n1 00:27:06.143 14:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:07.076 00:27:07.076 14:21:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:07.334 00:27:07.334 14:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:07.334 14:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:07.591 14:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:08.156 14:21:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:11.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:11.434 14:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:11.434 14:21:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2601442 00:27:11.434 14:21:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:11.434 14:21:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2601442 00:27:12.807 0 00:27:12.808 14:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:12.808 [2024-07-26 14:21:21.236873] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:27:12.808 [2024-07-26 14:21:21.236986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600535 ] 00:27:12.808 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.808 [2024-07-26 14:21:21.312013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.808 [2024-07-26 14:21:21.433214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.808 [2024-07-26 14:21:24.957774] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:12.808 [2024-07-26 14:21:24.957857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.808 [2024-07-26 14:21:24.957883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.808 [2024-07-26 14:21:24.957902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.808 [2024-07-26 14:21:24.957917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.808 [2024-07-26 14:21:24.957933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.808 [2024-07-26 14:21:24.957948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.808 [2024-07-26 14:21:24.957964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.808 [2024-07-26 14:21:24.957980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.808 [2024-07-26 14:21:24.958003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.808 [2024-07-26 14:21:24.958055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.808 [2024-07-26 14:21:24.958090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4790 (9): Bad file descriptor 00:27:12.808 [2024-07-26 14:21:25.010500] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:12.808 Running I/O for 1 seconds... 
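Condensed, the phase-2 flow traced above is: start bdevperf in wait-for-RPC mode, publish two extra portals on the target, register all three paths with the host so bdev_nvme has failover trids, drop the active path, then drive I/O over the RPC socket. A sketch built from the commands visible in this log ($rootdir standing in for the spdk checkout is an assumption):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 1 -f &
"$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421
"$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4422
# Register every portal with the host; the extra attaches become alternate trids.
for port in 4420 4421 4422; do
    "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Drop the active path; a "Start failover from ...:4420 to ...:4421" notice follows.
"$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
    NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests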
00:27:12.808
00:27:12.808 Latency(us)
00:27:12.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.808 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:12.808 Verification LBA range: start 0x0 length 0x4000
00:27:12.808 NVMe0n1 : 1.02 7823.52 30.56 0.00 0.00 16295.23 3519.53 17476.27
00:27:12.808 ===================================================================================================================
00:27:12.808 Total : 7823.52 30.56 0.00 0.00 16295.23 3519.53 17476.27
00:27:12.808 14:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
14:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:13.324 14:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:27:13.889 14:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:14.146 14:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:27:17.425 14:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:27:17.425 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2600535
14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2600535 ']'
14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2600535
00:27:17.425 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2600535
00:27:17.425 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2600535'
killing process with pid 2600535
14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2600535
00:27:17.425 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2600535
00:27:17.685 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:27:17.685 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.252 14:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.252 rmmod nvme_tcp 00:27:18.252 rmmod nvme_fabrics 00:27:18.252 rmmod nvme_keyring 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2597553 ']' 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2597553 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2597553 ']' 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2597553 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2597553 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2597553' 00:27:18.252 killing process with pid 2597553 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2597553 00:27:18.252 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2597553 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.819 14:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.722 00:27:20.722 real 0m40.082s 00:27:20.722 user 2m22.403s 00:27:20.722 sys 0m7.255s 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:20.722 ************************************ 00:27:20.722 END TEST nvmf_failover 00:27:20.722 ************************************ 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.722 ************************************ 00:27:20.722 START TEST nvmf_host_discovery 00:27:20.722 ************************************ 00:27:20.722 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:20.981 * Looking for test storage... 00:27:20.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.981 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.982 14:21:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.982 14:21:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.540 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.541 14:21:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:23.541 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:23.541 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:23.541 Found net devices under 0000:84:00.0: cvl_0_0 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:23.541 Found net devices under 0000:84:00.1: cvl_0_1 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.541 14:21:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:27:23.541 00:27:23.541 --- 10.0.0.2 ping statistics --- 00:27:23.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.541 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:27:23.541 00:27:23.541 --- 10.0.0.1 ping statistics --- 00:27:23.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.541 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.541 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2604197 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2604197 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2604197 ']' 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:23.800 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 [2024-07-26 14:21:40.508095] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:27:23.800 [2024-07-26 14:21:40.508189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.800 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.800 [2024-07-26 14:21:40.589221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.058 [2024-07-26 14:21:40.711048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.058 [2024-07-26 14:21:40.711102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.058 [2024-07-26 14:21:40.711118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.058 [2024-07-26 14:21:40.711131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.058 [2024-07-26 14:21:40.711143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.058 [2024-07-26 14:21:40.711173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.058 [2024-07-26 14:21:40.867138] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:27:24.058 [2024-07-26 14:21:40.875362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.058 null0 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.058 null1 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.058 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2604225 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2604225 /tmp/host.sock 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2604225 ']' 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:24.059 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.059 14:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.317 [2024-07-26 14:21:40.962796] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
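Stripped of the xtrace noise, the discovery-side bring-up above is a short RPC sequence against the namespaced target (rpc_cmd here is the suite's wrapper around scripts/rpc.py and /var/tmp/spdk.sock; the cvl_0_0_ns_spdk namespace was created by the nvmf_tcp_init step earlier in this log). A sketch under those assumptions:

ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512    # 1000 MB null bdev, 512-byte blocks
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine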
00:27:24.317 [2024-07-26 14:21:40.962895] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604225 ] 00:27:24.317 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.317 [2024-07-26 14:21:41.039004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.317 [2024-07-26 14:21:41.164605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 
14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.575 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:24.833 14:21:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.833 [2024-07-26 14:21:41.693526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.833 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:25.091 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:27:25.092 14:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:25.656 [2024-07-26 14:21:42.315080] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:25.656 [2024-07-26 14:21:42.315125] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:25.656 [2024-07-26 14:21:42.315154] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:25.656 
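The 'local max=10' / '(( max-- ))' / 'eval' / 'sleep 1' fragments replayed above are the waitforcondition polling loop from autotest_common.sh, and host/discovery.sh@74-75 derives notification_count and notify_id from notify_get_notifications. A minimal bash sketch of both helpers, reconstructed from the xtrace output rather than copied from the source tree (rpc_cmd is assumed to wrap scripts/rpc.py, as it does in this harness):

    # Poll an arbitrary shell condition for up to ~10 seconds (10 x 1s), e.g.
    #   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    # Count notifications newer than the notify_id cursor and advance it;
    # the trace's notification_count=/notify_id= assignments come from here.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
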
[2024-07-26 14:21:42.402416] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:25.656 [2024-07-26 14:21:42.506496] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:25.656 [2024-07-26 14:21:42.506523] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.221 14:21:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.221 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.221 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:26.221 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
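get_subsystem_names and get_bdev_list, exercised on nearly every assertion above, are each one JSON-RPC call plus a jq/sort/xargs pipeline; the exact commands appear verbatim in the trace, so the helpers reduce to roughly:

    # Names of the NVMe controllers attached on the host application.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # Bdev names on the host, flattened to one sorted line so that checks
    # such as [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] compare strings.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
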
00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.222 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.480 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.752 14:21:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:26.752 14:21:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:27.686 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:27.686 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:27.686 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:27.686 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:27.686 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:27.686 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.686 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.687 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.945 [2024-07-26 14:21:44.622394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:27.945 [2024-07-26 14:21:44.622873] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:27.945 [2024-07-26 14:21:44.622923] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:27.945 14:21:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.945 [2024-07-26 14:21:44.749256] 
bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:27.945 14:21:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:27.945 [2024-07-26 14:21:44.813907] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:27.945 [2024-07-26 14:21:44.813934] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:27.945 [2024-07-26 14:21:44.813945] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s 
/tmp/host.sock notify_get_notifications -i 2 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.318 [2024-07-26 14:21:45.878234] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:29.318 [2024-07-26 14:21:45.878280] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:29.318 [2024-07-26 14:21:45.879783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.318 [2024-07-26 14:21:45.879820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.318 [2024-07-26 14:21:45.879839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.318 [2024-07-26 14:21:45.879855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.318 [2024-07-26 14:21:45.879870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.318 [2024-07-26 14:21:45.879885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.318 [2024-07-26 14:21:45.879900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.318 [2024-07-26 14:21:45.879916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.318 [2024-07-26 14:21:45.879932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:29.318 [2024-07-26 14:21:45.889785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.318 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.318 [2024-07-26 14:21:45.899841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.318 [2024-07-26 14:21:45.900278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.318 [2024-07-26 14:21:45.900329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ac230 with addr=10.0.0.2, port=4420 00:27:29.318 [2024-07-26 14:21:45.900349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.318 [2024-07-26 14:21:45.900376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.318 [2024-07-26 14:21:45.900414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.318 [2024-07-26 14:21:45.900444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.318 [2024-07-26 14:21:45.900465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.318 [2024-07-26 14:21:45.900500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.318 [2024-07-26 14:21:45.909945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.318 [2024-07-26 14:21:45.910231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.318 [2024-07-26 14:21:45.910280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ac230 with addr=10.0.0.2, port=4420 00:27:29.318 [2024-07-26 14:21:45.910298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.318 [2024-07-26 14:21:45.910323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.318 [2024-07-26 14:21:45.910346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.318 [2024-07-26 14:21:45.910361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.318 [2024-07-26 14:21:45.910376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.318 [2024-07-26 14:21:45.910397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.318 [2024-07-26 14:21:45.920022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.319 [2024-07-26 14:21:45.920361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.319 [2024-07-26 14:21:45.920412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ac230 with addr=10.0.0.2, port=4420 00:27:29.319 [2024-07-26 14:21:45.920440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.319 [2024-07-26 14:21:45.920468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.319 [2024-07-26 14:21:45.920504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.319 [2024-07-26 14:21:45.920523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.319 [2024-07-26 14:21:45.920537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.319 [2024-07-26 14:21:45.920558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:29.319 [2024-07-26 14:21:45.930099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.319 [2024-07-26 14:21:45.930359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.319 [2024-07-26 14:21:45.930407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ac230 with addr=10.0.0.2, port=4420 00:27:29.319 [2024-07-26 14:21:45.930426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.319 [2024-07-26 14:21:45.930460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.319 [2024-07-26 14:21:45.930483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.319 [2024-07-26 14:21:45.930499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.319 [2024-07-26 14:21:45.930514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.319 [2024-07-26 14:21:45.930534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:29.319 [2024-07-26 14:21:45.940180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.319 [2024-07-26 14:21:45.940408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.319 [2024-07-26 14:21:45.940479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ac230 with addr=10.0.0.2, port=4420 00:27:29.319 [2024-07-26 14:21:45.940499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.319 [2024-07-26 14:21:45.940524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.319 [2024-07-26 14:21:45.940547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.319 [2024-07-26 14:21:45.940562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.319 [2024-07-26 14:21:45.940576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.319 [2024-07-26 14:21:45.940597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.319 [2024-07-26 14:21:45.950267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.319 [2024-07-26 14:21:45.950481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.319 [2024-07-26 14:21:45.950512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ac230 with addr=10.0.0.2, port=4420 00:27:29.319 [2024-07-26 14:21:45.950530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.319 [2024-07-26 14:21:45.950554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.319 [2024-07-26 14:21:45.950577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.319 [2024-07-26 14:21:45.950592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.319 [2024-07-26 14:21:45.950606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.319 [2024-07-26 14:21:45.950626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
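Once the target serves the refreshed discovery log page, discovery_remove_controllers drops the stale 4420 path ("not found" below) while keeping 4421 ("found again"), and the reconnect retries above stop. host/discovery.sh@131 then asserts the new path set with the helpers sketched earlier; under the same assumptions, that wait is:

    # NVMF_SECOND_PORT is 4421 in this run.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
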
00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.319 [2024-07-26 14:21:45.960344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.319 [2024-07-26 14:21:45.960549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.319 [2024-07-26 14:21:45.960580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ac230 with addr=10.0.0.2, port=4420 00:27:29.319 [2024-07-26 14:21:45.960597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ac230 is same with the state(5) to be set 00:27:29.319 [2024-07-26 14:21:45.960621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ac230 (9): Bad file descriptor 00:27:29.319 [2024-07-26 14:21:45.960643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.319 [2024-07-26 14:21:45.960658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.319 [2024-07-26 14:21:45.960673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.319 [2024-07-26 14:21:45.960693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.319 [2024-07-26 14:21:45.965851] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:29.319 [2024-07-26 14:21:45.965885] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:29.319 14:21:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:29.319 14:21:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:29.319 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( 
max-- )) 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.320 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.577 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:29.578 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:29.578 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:29.578 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:29.578 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.578 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.578 14:21:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.510 [2024-07-26 14:21:47.227887] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:30.510 [2024-07-26 14:21:47.227918] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:30.510 [2024-07-26 14:21:47.227943] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.510 [2024-07-26 14:21:47.314202] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:30.510 [2024-07-26 14:21:47.382813] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:30.510 [2024-07-26 14:21:47.382857] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.510 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:30.510 request:
00:27:30.510 {
00:27:30.510 "name": "nvme",
00:27:30.510 "trtype": "tcp",
00:27:30.510 "traddr": "10.0.0.2",
00:27:30.510 "adrfam": "ipv4",
00:27:30.510 "trsvcid": "8009",
00:27:30.510 "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:30.510 "wait_for_attach": true,
00:27:30.510 "method": "bdev_nvme_start_discovery",
00:27:30.510 "req_id": 1
00:27:30.510 }
00:27:30.510 Got JSON-RPC error response
00:27:30.510 response:
00:27:30.510 {
00:27:30.510 "code": -17,
00:27:30.768 "message": "File exists"
00:27:30.768 }
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:30.768 request:
00:27:30.768 {
00:27:30.768 "name": "nvme_second",
00:27:30.768 "trtype": "tcp",
00:27:30.768 "traddr": "10.0.0.2",
00:27:30.768 "adrfam": "ipv4",
00:27:30.768 "trsvcid": "8009",
00:27:30.768 "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:30.768 "wait_for_attach": true,
00:27:30.768 "method": "bdev_nvme_start_discovery",
00:27:30.768 "req_id": 1
00:27:30.768 }
00:27:30.768 Got JSON-RPC error response
00:27:30.768 response:
00:27:30.768 {
00:27:30.768 "code": -17,
00:27:30.768 "message": "File exists"
00:27:30.768 }
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery --
host/discovery.sh@67 -- # jq -r '.[].name' 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:30.768 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.769 14:21:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.141 [2024-07-26 14:21:48.606922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.141 [2024-07-26 14:21:48.606994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6af6c0 with addr=10.0.0.2, port=8010 00:27:32.141 [2024-07-26 14:21:48.607029] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:32.141 [2024-07-26 14:21:48.607062] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:32.141 [2024-07-26 14:21:48.607078] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:27:33.074 [2024-07-26 14:21:49.609467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.074 [2024-07-26 14:21:49.609528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6af6c0 with addr=10.0.0.2, port=8010
00:27:33.074 [2024-07-26 14:21:49.609560] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:33.074 [2024-07-26 14:21:49.609577] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:33.074 [2024-07-26 14:21:49.609592] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:27:34.007 [2024-07-26 14:21:50.611483] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:27:34.007 request:
00:27:34.007 {
00:27:34.007 "name": "nvme_second",
00:27:34.007 "trtype": "tcp",
00:27:34.007 "traddr": "10.0.0.2",
00:27:34.007 "adrfam": "ipv4",
00:27:34.007 "trsvcid": "8010",
00:27:34.007 "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:34.007 "wait_for_attach": false,
00:27:34.007 "attach_timeout_ms": 3000,
00:27:34.007 "method": "bdev_nvme_start_discovery",
00:27:34.007 "req_id": 1
00:27:34.007 }
00:27:34.007 Got JSON-RPC error response
00:27:34.007 response:
00:27:34.007 {
00:27:34.007 "code": -110,
00:27:34.007 "message": "Connection timed out"
00:27:34.007 }
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:27:34.007 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2604225
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2604197 ']'
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2604197
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2604197 ']'
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2604197
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2604197
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2604197'
killing process with pid 2604197
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2604197
00:27:34.008 14:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2604197
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:34.267 14:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:36.812
00:27:36.812 real 0m15.551s
00:27:36.812 user 0m22.819s
00:27:36.812 sys 0m3.658s
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:36.812 ************************************
00:27:36.812 END TEST nvmf_host_discovery
00:27:36.812 ************************************
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.812 ************************************
00:27:36.812 START TEST nvmf_host_multipath_status
00:27:36.812 ************************************
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:27:36.812 * Looking for test storage...
00:27:36.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:36.812
14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:36.812 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.813 14:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:39.347 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:39.347 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.347 14:21:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:39.347 Found net devices under 0000:84:00.0: cvl_0_0 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:39.347 Found net devices under 0000:84:00.1: cvl_0_1 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:39.347 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:39.348 14:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:27:39.348 00:27:39.348 --- 10.0.0.2 ping statistics --- 00:27:39.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.348 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:39.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:39.348 00:27:39.348 --- 10.0.0.1 ping statistics --- 00:27:39.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.348 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2607533 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2607533 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2607533 ']' 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:39.348 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:39.348 [2024-07-26 14:21:56.125282] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:27:39.348 [2024-07-26 14:21:56.125377] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.348 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.348 [2024-07-26 14:21:56.201849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:39.635 [2024-07-26 14:21:56.324946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.635 [2024-07-26 14:21:56.325010] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.635 [2024-07-26 14:21:56.325028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.635 [2024-07-26 14:21:56.325041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.635 [2024-07-26 14:21:56.325053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.635 [2024-07-26 14:21:56.325154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.635 [2024-07-26 14:21:56.325162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2607533 00:27:39.635 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:39.895 [2024-07-26 14:21:56.740573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.895 14:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:40.461 Malloc0 00:27:40.461 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:40.719 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:40.976 14:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.541 [2024-07-26 14:21:58.417080] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.798 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:42.056 [2024-07-26 14:21:58.770207] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2607902 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2607902 /var/tmp/bdevperf.sock 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2607902 ']' 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:42.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:42.056 14:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:42.988 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:42.988 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:27:42.988 14:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:43.554 14:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:44.487 Nvme0n1 00:27:44.487 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:45.051 Nvme0n1 00:27:45.051 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:45.051 14:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:46.948 14:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:46.948 14:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:47.514 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:48.081 14:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:49.016 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:49.016 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:49.016 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.016 14:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:49.274 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:49.274 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:49.274 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.274 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:49.839 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.839 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:49.839 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.839 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.097 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.097 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:50.097 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:50.097 14:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.355 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.355 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:50.355 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.355 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:50.918 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.918 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:50.918 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.918 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:51.176 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.176 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:51.177 14:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:51.434 14:22:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:51.692 14:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.065 14:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.323 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.323 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.323 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.323 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.580 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.580 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.580 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.580 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:54.144 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.144 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:54.144 14:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.144 14:22:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:54.402 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.402 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:54.402 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.402 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:54.992 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.992 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:54.992 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:55.250 14:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:55.508 14:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:56.441 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:56.441 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:56.441 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.441 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:57.006 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.006 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:57.006 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:57.006 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.265 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:57.265 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:57.265 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.265 14:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:57.522 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.522 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:57.522 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.522 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:57.780 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.780 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:57.780 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.780 14:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:58.345 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.345 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:58.345 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.345 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:58.911 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.911 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:58.911 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:59.169 14:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:59.427 14:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:00.361 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:00.361 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:00.361 14:22:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.361 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:00.618 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.618 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:00.618 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.618 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:01.184 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:01.184 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:01.184 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.184 14:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.442 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.443 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.443 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:01.443 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.701 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.701 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:01.701 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.701 14:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:02.267 14:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.267 14:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:02.267 14:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.267 14:22:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:02.833 14:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.833 14:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:02.833 14:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:03.091 14:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:03.657 14:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:04.589 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:04.589 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:04.589 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.589 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:04.847 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:04.847 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:04.847 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.847 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:05.104 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:05.104 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:05.104 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.104 14:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:05.362 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.362 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:05.362 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.362 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:05.929 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.929 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:05.929 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.929 14:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:06.495 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:06.495 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:06.495 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.495 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:06.752 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:06.752 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:06.752 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:07.009 14:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:07.268 14:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:08.640 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:08.640 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:08.640 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.640 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:08.640 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:08.640 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:08.640 14:22:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.640 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:09.207 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.207 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:09.207 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.207 14:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:09.465 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.465 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:09.465 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.465 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:09.723 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.723 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:09.723 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.723 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:09.983 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.983 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:09.983 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.983 14:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:10.276 14:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.276 14:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:11.210 14:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:28:11.210 14:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:11.210 14:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:11.468 14:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.842 14:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:13.407 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.407 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:13.407 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.407 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:13.665 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.665 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:13.665 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.665 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:13.923 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.923 14:22:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:13.923 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.923 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:14.182 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.182 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:14.182 14:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.182 14:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:14.748 14:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.748 14:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:14.748 14:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:15.006 14:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:15.264 14:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:16.199 14:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:16.199 14:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:16.199 14:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.199 14:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:16.766 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:16.766 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:16.766 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.766 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:17.024 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.024 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:17.024 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.024 14:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:17.589 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.589 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:17.589 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.589 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:17.847 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.847 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:17.847 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.847 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:18.105 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.105 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:18.105 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.105 14:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:18.672 14:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.672 14:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:18.672 14:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:18.930 14:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:19.496 14:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
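
The cycle traced above never changes shape: set_ANA_state (multipath_status.sh@59-60) flips the ANA state of the two listeners, and after a one-second settle check_status asserts six expectations through port_status (@64-73), which polls bdevperf over its RPC socket and filters the reply with jq. A minimal bash reconstruction of those helpers, inferred from the xtrace rather than copied from the script (the $rpc_py shorthand and the function bodies are assumptions):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # port_status PORT ATTR EXPECTED: read one attribute (current, connected,
    # accessible) of the io_path whose listener trsvcid matches PORT (@64).
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]   # nonzero on mismatch fails the run
    }

    # check_status CUR4420 CUR4421 CONN4420 CONN4421 ACC4420 ACC4421,
    # in the order the trace shows at @68-@73.
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

    # set_ANA_state STATE4420 STATE4421: one ANA state per listener (@59-60).
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
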
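One step above deserves a callout: at @116 the test runs bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, switching the bdev away from SPDK's default active_passive policy. The assertions show the difference. Before it, every round expects exactly one path with current == true, even when both listeners are non_optimized (@100 is followed by check_status true false ...); after it, rounds where both listeners share an ANA group expect current == true on both paths at once (@119 optimized/optimized and @129 non_optimized/non_optimized are each followed by check_status true true ...), i.e. I/O is spread across all paths of the best group instead of pinned to a single active path.
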
00:28:20.431 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:20.431 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:20.431 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.431 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:20.689 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:20.689 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:20.689 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.689 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:21.256 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.256 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:21.256 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.256 14:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:21.514 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.514 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:21.514 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.514 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:22.081 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.081 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:22.081 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.081 14:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:22.339 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.339 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:22.339 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.339 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:22.597 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.597 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:22.597 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:23.166 14:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:23.732 14:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:24.665 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:24.665 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:24.665 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.665 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:24.924 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.924 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:24.924 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.924 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:25.182 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:25.182 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:25.182 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.182 14:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:25.472 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:28:25.472 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:28:25.472 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:25.472 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:28:25.735 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:25.735 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:28:25.735 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:25.735 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:28:26.301 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:28:26.301 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:28:26.301 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:28:26.301 14:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2607902
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2607902 ']'
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2607902
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2607902
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2607902'
00:28:26.559 killing process with pid 2607902
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2607902
00:28:26.559 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2607902
00:28:26.827 Connection closed with partial response:
00:28:26.827
00:28:26.827
00:28:26.827 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2607902
00:28:26.827 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:26.827 [2024-07-26 14:21:58.837163] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:28:26.827 [2024-07-26 14:21:58.837250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607902 ]
00:28:26.827 EAL: No free 2048 kB hugepages reported on node 1
00:28:26.827 [2024-07-26 14:21:58.901012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:26.828 [2024-07-26 14:21:59.023648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:26.828 Running I/O for 90 seconds...
00:28:26.828 [2024-07-26 14:22:19.930514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.930580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:28:26.828 [2024-07-26 14:22:19.930658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.930683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:28:26.828 [2024-07-26 14:22:19.930711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.930740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:28:26.828 [2024-07-26 14:22:19.930766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.930784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:28:26.828 [2024-07-26 14:22:19.930810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.930828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:26.828 [2024-07-26 14:22:19.930854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.930872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:26.828 [2024-07-26 14:22:19.930897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.930916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
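
killprocess at @137 expands into the common/autotest_common.sh@950-@974 steps traced above; a sketch of what that xtrace shows, reconstructed rather than copied, with the branch this run never takes stubbed out:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1             # @950: refuse an empty pid
        kill -0 "$pid" || return 1            # @954: target must still be alive
        if [ "$(uname)" = Linux ]; then       # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_2 here
        fi
        if [ "$process_name" = sudo ]; then   # @960: false in this run
            :                                 # (the real helper handles sudo-wrapped pids)
        fi
        echo "killing process with pid $pid"  # @968
        kill "$pid"                           # @969
        wait "$pid"                           # @974: reap it before returning
    }

The Connection closed with partial response: lines in between are output from the dying bdevperf process as its TCP connections are torn down mid-I/O; the script then waits on the pid once more at @139 before @141 prints the run's captured output from try.txt.
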
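Two details of the bdevperf banner cross-check neatly against the rest of the log: the EAL core mask -c 0x4 is binary 100, selecting only core 2, which is exactly where reactor_run later reports its reactor, and --file-prefix=spdk_pid2607902 ties the process's hugepage files to the same pid that killprocess reaped at @137.
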
00:28:26.828 [2024-07-26 14:22:19.930941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.930960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.930985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.828 [2024-07-26 14:22:19.931323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931939] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.931967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.931985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.828 [2024-07-26 14:22:19.932358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:26.828 [2024-07-26 14:22:19.932385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000
00:28:26.828 [2024-07-26 14:22:19.932403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:28:26.828-831 [2024-07-26 14:22:19.932-.937] nvme_qpair.c: repeated 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: WRITE sqid:1 nsid:1 lba:110776-111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, interleaved with READ sqid:1 nsid:1 lba:110104-110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every completion: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:28:26.831 [2024-07-26 14:22:40.334905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.831-834 [2024-07-26 14:22:40.334-.347] nvme_qpair.c: repeated 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: WRITE sqid:1 nsid:1 lba:37392-38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, interleaved with READ sqid:1 nsid:1 lba:37384-37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every completion: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:28:26.834 [2024-07-26 14:22:40.347324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:28:26.834 [2024-07-26 14:22:40.347762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.347969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.347987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.348029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.348070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.348111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.348158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.348201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.348243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.348285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.348326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.348351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.348369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.834 [2024-07-26 14:22:40.349568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.349609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.349651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.349693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.349735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:26.834 [2024-07-26 14:22:40.349759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.834 [2024-07-26 14:22:40.349776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.349801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.349818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.349842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.835 [2024-07-26 14:22:40.349860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.349884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.349902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.349926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.349943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.349968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.349985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.350026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.350077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.350119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.350161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.350202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.350532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.350549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.351387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.351447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.351492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 
m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.835 [2024-07-26 14:22:40.351958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.351983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.352000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.352024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.352042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.352066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.352084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.352108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.352126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:26.835 [2024-07-26 14:22:40.352603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.835 [2024-07-26 14:22:40.352631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.352681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.352724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.352765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.352806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.352848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.352889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.352938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.352963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.352980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.353005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.353022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.353046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.353064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.353089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.353106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.353130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.353148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.353172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.353189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.353214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.353231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.355404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:26.836 [2024-07-26 14:22:40.355499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.355710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.355751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.355841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.836 [2024-07-26 14:22:40.355883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:98 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.836 [2024-07-26 14:22:40.355967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.836 [2024-07-26 14:22:40.355991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.356008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.356032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.356050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.356074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.356092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.358163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.358213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.358257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358373] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.358906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.358947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.358972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.358989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.359013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.359030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.359054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.359072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.359096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.359114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.360017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.837 [2024-07-26 14:22:40.360046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.360077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.360096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.360121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.360139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.837 [2024-07-26 14:22:40.360164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.837 [2024-07-26 14:22:40.360181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.838 [2024-07-26 14:22:40.360223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.838 [2024-07-26 14:22:40.360266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.838 [2024-07-26 14:22:40.360315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.838 [2024-07-26 14:22:40.360357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.838 [2024-07-26 14:22:40.360399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.838 [2024-07-26 14:22:40.360453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.838 [2024-07-26 14:22:40.360497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:26.838 [2024-07-26 14:22:40.360523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.838 [2024-07-26 14:22:40.360542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:28:26.838-00:28:26.844 [... repeated nvme_qpair.c NOTICE output condensed: nvme_io_qpair_print_command / spdk_nvme_print_completion pairs from 2024-07-26 14:22:40.360567 through 14:22:40.380779, in which every outstanding READ/WRITE on qid:1 (nsid:1, lba range 37456-39592, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 
00:28:26.844 [2024-07-26 14:22:40.380804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.380821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.380846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.380864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.380888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.380906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.380930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.380948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.380972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.380990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:28:26.844 [2024-07-26 14:22:40.381664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.381930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.381972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.381996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.382013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.382038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.382055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.382085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.382104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.384672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.844 [2024-07-26 14:22:40.384702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.384743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.384763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:26.844 [2024-07-26 14:22:40.384789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.844 [2024-07-26 14:22:40.384808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.384833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.384851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.384875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.384893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.384917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.384935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.384959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.384976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.845 [2024-07-26 14:22:40.385501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.385885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.385968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.385992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.386010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.386034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.845 [2024-07-26 14:22:40.386051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.386076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.386093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.386118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.386136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:26.845 [2024-07-26 14:22:40.386808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-26 14:22:40.386837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.386868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.386888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.386923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.386942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.386968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.386985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.387027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.387070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.387112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.387154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.387195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.387279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.387303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.387321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.388780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.388810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.388861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.388883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:28:26.846 [2024-07-26 14:22:40.388915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.388934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.388960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.388978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.846 [2024-07-26 14:22:40.389835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:26.846 [2024-07-26 14:22:40.389905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-26 14:22:40.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.389948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-26 14:22:40.389971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.389996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-26 14:22:40.390014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.390038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.390055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.390080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.390097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.390122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.390139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.390163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.847 [2024-07-26 14:22:40.390180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.390206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.390224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.391395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.391455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.391499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.391541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.391583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.391625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.847 [2024-07-26 14:22:40.391675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-26 14:22:40.391717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.847 [2024-07-26 14:22:40.391742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
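Every completion in that run carries the same status pair. Reading the "(03/02)" field against the NVMe status layout (a fact about the spec, not something the log states): status code type 03h is Path Related Status, status code 02h is Asymmetric Access Inaccessible, and dnr:0 means the Do Not Retry bit is clear, so the initiator may retry the I/O on another path, which is exactly the failover behavior this multipath test provokes. A minimal decoding helper, written as an illustrative sketch (the function name is made up, not part of the harness):

# decode_nvme_status SCT SC - interpret the "(SCT/SC)" pair that
# spdk_nvme_print_completion prints, for the one case seen above.
decode_nvme_status() {
    case "$1/$2" in
        03/02) echo "Path Related Status / Asymmetric Access Inaccessible (dnr=0: retry on another path)" ;;
        *)     echo "status $1/$2 not handled by this sketch" ;;
    esac
}
decode_nvme_status 03 02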
00:28:26.847 Received shutdown signal, test time was about 41.331953 seconds
00:28:26.847
00:28:26.847 Latency(us)
00:28:26.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:26.847 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:26.847 Verification LBA range: start 0x0 length 0x4000
00:28:26.847 Nvme0n1 : 41.33 7425.15 29.00 0.00 0.00 17208.13 336.78 5020737.23
00:28:26.847 ===================================================================================================================
00:28:26.847 Total : 7425.15 29.00 0.00 0.00 17208.13 336.78 5020737.23
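A quick consistency check on that row (arithmetic added here, not part of the original output): at the job's 4096-byte IO size, 7425.15 IOPS works out to 7425.15 x 4096 / 1048576 = 29.00 MiB/s, matching the MiB/s column, and the 41.33 runtime matches the reported 41.331953-second test time.

echo 'scale=2; 7425.15 * 4096 / 1048576' | bc   # prints 29.00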
00:28:26.847 14:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:27.413 rmmod nvme_tcp
00:28:27.413 rmmod nvme_fabrics
00:28:27.413 rmmod nvme_keyring
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2607533 ']'
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2607533
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2607533 ']'
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2607533
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2607533
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2607533'
00:28:27.413 killing process with pid 2607533
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2607533
00:28:27.413 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2607533
00:28:27.980 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:27.981 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:27.981 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:27.981 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:27.981 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:27.981 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:27.981 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:27.981 14:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:29.883
00:28:29.883 real 0m53.445s
00:28:29.883 user 2m45.955s
00:28:29.883 sys 0m14.142s
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:28:29.883 ************************************
00:28:29.883 END TEST nvmf_host_multipath_status
00:28:29.883 ************************************
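The teardown above follows a fixed shape: quiesce I/O, unload the kernel NVMe-oF initiator modules, then kill the SPDK target and reap it. A condensed sketch of that shape (distilled from the trace as an illustration, not the harness's verbatim functions, which also retry the module unload up to 20 times under set +e):

# Teardown pattern mirrored from the nvmftestfini/killprocess trace above.
nvmf_teardown() {
    local pid=$1
    sync                                 # flush buffered writes before unload
    modprobe -v -r nvme-tcp              # also drops nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics
    if kill -0 "$pid" 2>/dev/null; then  # is the target process still alive?
        kill "$pid"                      # SIGTERM, as in the trace
        wait "$pid" 2>/dev/null || true  # reap it (works when it is our child)
    fi
}

Note that the trace guards the kill with a kill -0 liveness probe plus a ps --no-headers -o comm= name check, so it never signals a recycled pid that now belongs to an unrelated process.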
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.883 ************************************
00:28:29.883 START TEST nvmf_discovery_remove_ifc
00:28:29.883 ************************************
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:29.883 * Looking for test storage...
00:28:29.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:29.883 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
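With those defaults in hand, a host can be attached to a suite-created subsystem by hand. A hypothetical invocation (the 10.0.0.2 target address is illustrative only; the port, subsystem NQN, host NQN, and host ID are the values traced above, and this is roughly how the NVME_CONNECT and NVME_HOST pieces are meant to compose):

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid=cd6acfbe-4794-e311-a299-001e67a97b02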
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
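The stretch of log that follows is nvmf/common.sh's gather_supported_nvmf_pci_devs: it matches known e810/x722/Mellanox vendor:device IDs against the PCI bus cache, then resolves each matched function to its kernel net device through sysfs. A minimal sketch of that lookup, using the 0000:84:00.x / 0x8086:0x159b values from this run (the operstate read is an assumption about how the 'up == up' gate at common.sh@390 is implemented):

# Resolve a PCI function to its net device, as the log's
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion does.
pci=0000:84:00.0                                     # first e810 port seen below
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [[ -e $dev ]] || continue                        # no network driver bound
    [[ $(cat "$dev/operstate") == up ]] || continue  # assumed form of the 'up' check
    echo "Found net devices under $pci: ${dev##*/}"  # prints e.g. cvl_0_0
done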
00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.142 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.143 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.143 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.143 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.143 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.143 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.143 14:22:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:28:32.677 14:22:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:32.677 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:32.677 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.677 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:32.678 Found net devices under 0000:84:00.0: cvl_0_0 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.678 
14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:32.678 Found net devices under 0000:84:00.1: cvl_0_1 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:32.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:32.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:28:32.678 00:28:32.678 --- 10.0.0.2 ping statistics --- 00:28:32.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.678 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:28:32.678 00:28:32.678 --- 10.0.0.1 ping statistics --- 00:28:32.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.678 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2615208 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2615208 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2615208 ']' 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
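The two ping exchanges above are the final sanity check of nvmf_tcp_init, which builds the test topology: one port of the e810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target with 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1. Condensed from the commands the log just ran (interface and namespace names are specific to this rig):

ip netns add cvl_0_0_ns_spdk                     # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target-side port in
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                               # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> root ns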
00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.678 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.937 [2024-07-26 14:22:49.566824] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:28:32.937 [2024-07-26 14:22:49.566925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.937 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.937 [2024-07-26 14:22:49.662550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.937 [2024-07-26 14:22:49.802289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.937 [2024-07-26 14:22:49.802362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.937 [2024-07-26 14:22:49.802383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.937 [2024-07-26 14:22:49.802400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.937 [2024-07-26 14:22:49.802414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.937 [2024-07-26 14:22:49.802460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.195 14:22:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.195 [2024-07-26 14:22:49.980919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.195 [2024-07-26 14:22:49.989156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:33.195 null0 00:28:33.195 [2024-07-26 14:22:50.021170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2615352 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2615352 /tmp/host.sock 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@831 -- # '[' -z 2615352 ']' 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:33.195 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:33.195 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.453 [2024-07-26 14:22:50.149464] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:28:33.453 [2024-07-26 14:22:50.149650] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615352 ] 00:28:33.453 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.453 [2024-07-26 14:22:50.259303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.711 [2024-07-26 14:22:50.384685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:33.968 
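Two SPDK apps are now in play: the target (nvmfpid 2615208) running inside the namespace on the default RPC socket, and a host-side nvmf_tgt (hostpid 2615352) on /tmp/host.sock acting as the discovery client. The bdev_nvme_start_discovery call above also installs the reconnect policy the rest of the test exercises. The host-side sequence, as issued in this run (rpc_cmd is the autotest wrapper around scripts/rpc.py):

# Host app: separate instance, private RPC socket, bdev_nvme debug logging.
build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
rpc_cmd -s /tmp/host.sock framework_start_init

# Attach through the discovery service on 10.0.0.2:8009; the three timeouts
# drive the failover behaviour below (retry every 1s, give up after 2s).
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach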
14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.968 14:22:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 [2024-07-26 14:22:51.790168] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:35.341 [2024-07-26 14:22:51.790197] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:35.341 [2024-07-26 14:22:51.790224] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:35.341 [2024-07-26 14:22:51.877514] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:35.341 [2024-07-26 14:22:51.981249] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:35.341 [2024-07-26 14:22:51.981318] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:35.341 [2024-07-26 14:22:51.981364] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:35.341 [2024-07-26 14:22:51.981391] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:35.341 [2024-07-26 14:22:51.981417] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.341 [2024-07-26 14:22:51.988137] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19d7e50 was disconnected and freed. delete nvme_qpair. 
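The bdev_get_bdevs / jq / sort / xargs triples that repeat through the next screens are the script's wait_for_bdev helper polling until the host's bdev list matches an expected value: first "nvme0n1" after the attach above, later "" once the interface is removed, then "nvme1n1" after re-discovery. Roughly (the real helper in discovery_remove_ifc.sh may cap the number of retries):

get_bdev_list() {
    # Normalized name list of all bdevs visible to the host app.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the list equals the expected string.
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}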
00:28:35.341 14:22:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:35.341 14:22:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.276 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.533 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:36.533 14:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:37.465 14:22:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:37.465 14:22:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:38.837 14:22:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.781 14:22:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.715 [2024-07-26 14:22:57.422249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:40.715 [2024-07-26 14:22:57.422327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.715 [2024-07-26 14:22:57.422351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.715 [2024-07-26 14:22:57.422372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.715 [2024-07-26 14:22:57.422388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.715 [2024-07-26 14:22:57.422404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.715 [2024-07-26 14:22:57.422419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.715 [2024-07-26 14:22:57.422443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.715 [2024-07-26 14:22:57.422459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.715 [2024-07-26 14:22:57.422475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.715 [2024-07-26 14:22:57.422490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.715 [2024-07-26 14:22:57.422506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199e890 is same with the state(5) to be set 00:28:40.715 [2024-07-26 14:22:57.432266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199e890 (9): Bad file descriptor 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:40.715 14:22:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.715 [2024-07-26 14:22:57.442317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:41.653 [2024-07-26 14:22:58.469503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:41.653 [2024-07-26 14:22:58.469579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199e890 with addr=10.0.0.2, port=4420 00:28:41.653 [2024-07-26 14:22:58.469611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199e890 is same with the state(5) to be set 00:28:41.653 [2024-07-26 14:22:58.469670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199e890 (9): Bad file descriptor 00:28:41.653 [2024-07-26 14:22:58.470226] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:41.653 [2024-07-26 14:22:58.470281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.653 [2024-07-26 14:22:58.470302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.653 [2024-07-26 14:22:58.470322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.653 [2024-07-26 14:22:58.470362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.653 [2024-07-26 14:22:58.470384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:41.653 14:22:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:42.629 [2024-07-26 14:22:59.472891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:42.629 [2024-07-26 14:22:59.472926] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:42.629 [2024-07-26 14:22:59.472942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:42.629 [2024-07-26 14:22:59.472958] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:42.629 [2024-07-26 14:22:59.472981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
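The errno 110 connect() failures above are the point of the test: at discovery_remove_ifc.sh@75-76 the script deleted the target's address and downed its interface, so every reconnect attempt now times out. Under the policy set at attach time (reconnect every 1 s, fast-io-fail after 1 s, controller loss after 2 s) the host retries briefly, the reset fails, and the controller is deleted, which is why the bdev list drains to '' just below. The triggering step, verbatim from the log:

# Pull the target interface out from under the live NVMe/TCP connection.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down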
00:28:42.629 [2024-07-26 14:22:59.473024] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:42.629 [2024-07-26 14:22:59.473065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.629 [2024-07-26 14:22:59.473089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.629 [2024-07-26 14:22:59.473120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.629 [2024-07-26 14:22:59.473137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.629 [2024-07-26 14:22:59.473153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.629 [2024-07-26 14:22:59.473167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.629 [2024-07-26 14:22:59.473183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.629 [2024-07-26 14:22:59.473197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.629 [2024-07-26 14:22:59.473213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.629 [2024-07-26 14:22:59.473227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.629 [2024-07-26 14:22:59.473242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:42.629 [2024-07-26 14:22:59.473291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199dcf0 (9): Bad file descriptor 00:28:42.629 [2024-07-26 14:22:59.474288] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:42.629 [2024-07-26 14:22:59.474313] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:42.887 14:22:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.821 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.821 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.821 14:23:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.821 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.821 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.821 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.821 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.821 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.079 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:44.079 14:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:44.645 [2024-07-26 14:23:01.526586] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:44.645 [2024-07-26 14:23:01.526623] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:44.645 [2024-07-26 14:23:01.526651] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:44.903 [2024-07-26 14:23:01.612914] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:44.903 14:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:45.162 [2024-07-26 14:23:01.797620] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:45.162 [2024-07-26 14:23:01.797682] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:45.162 [2024-07-26 14:23:01.797734] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:45.162 [2024-07-26 14:23:01.797762] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:45.162 [2024-07-26 14:23:01.797778] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:45.162 [2024-07-26 14:23:01.804263] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19e17d0 was disconnected and freed. 
delete nvme_qpair. 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2615352 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2615352 ']' 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2615352 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2615352 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2615352' 00:28:46.096 killing process with pid 2615352 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2615352 00:28:46.096 14:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2615352 00:28:46.354 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:46.354 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:46.354 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:46.354 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:46.354 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:46.354 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:46.354 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:46.354 rmmod nvme_tcp 00:28:46.354 rmmod nvme_fabrics 00:28:46.354 rmmod nvme_keyring 
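The rmmod lines above are the verbose output of nvmfcleanup in nvmf/common.sh, which retries unloading the initiator-side kernel modules (a busy nvme-tcp can refuse to unload while connections drain) before the trap handler kills both SPDK apps and flushes the namespace addresses. Approximately (whether common.sh paces the retries between attempts is an assumption):

# nvmfcleanup, approximately: tolerate EBUSY until nvme-tcp lets go.
sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # emits the rmmod lines seen above
done
modprobe -v -r nvme-fabrics
set -e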
00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2615208 ']' 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2615208 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2615208 ']' 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2615208 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2615208 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2615208' 00:28:46.612 killing process with pid 2615208 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2615208 00:28:46.612 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2615208 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.870 14:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.793 14:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:48.793 00:28:48.793 real 0m18.960s 00:28:48.793 user 0m27.330s 00:28:48.793 sys 0m3.786s 00:28:48.793 14:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:48.793 14:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:48.793 ************************************ 00:28:48.793 END TEST nvmf_discovery_remove_ifc 00:28:48.793 ************************************ 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.052 ************************************ 00:28:49.052 START TEST nvmf_identify_kernel_target 00:28:49.052 ************************************ 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:49.052 * Looking for test storage... 00:28:49.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.052 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.053 14:23:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.053 14:23:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:52.340 
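
The device scan that unfolds below follows this condensed shape: per-family PCI device-ID lists are assembled from a pci_bus_cache map, then each selected PCI address is mapped to its kernel net device through sysfs. This is a sketch paraphrasing the xtrace; how pci_bus_cache gets filled, the link-state "up" check, and the rdma branches are elided.

    # Sketch of gather_supported_nvmf_pci_devs from nvmf/common.sh as traced
    # below. pci_bus_cache ("vendor:device" -> PCI addresses) is assumed to
    # be populated elsewhere in the real helper.
    intel=0x8086 mellanox=0x15b3
    declare -A pci_bus_cache
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x1017"]})   # plus several more ConnectX IDs
    pci_devs=("${e810[@]}")                      # the [[ e810 == e810 ]] branch selects this family
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this host the two e810 ports (0000:84:00.0 and 0000:84:00.1, device 0x159b) resolve to cvl_0_0 and cvl_0_1, which nvmf_tcp_init then splits into the target and initiator interfaces.
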
14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:52.340 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:52.340 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:52.340 Found net devices under 0000:84:00.0: cvl_0_0 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:52.340 Found net devices under 0000:84:00.1: cvl_0_1 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.340 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:52.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:28:52.341 00:28:52.341 --- 10.0.0.2 ping statistics --- 00:28:52.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.341 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:28:52.341 00:28:52.341 --- 10.0.0.1 ping statistics --- 00:28:52.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.341 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:52.341 14:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:53.277 Waiting for block devices as requested 00:28:53.277 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:53.536 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:53.536 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:53.794 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:53.794 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:53.794 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:53.794 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:54.053 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:54.053 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:54.053 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:54.053 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:54.312 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:54.312 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:54.312 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:54.312 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:54.571 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:54.571 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
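
The configfs layout named by these variables is populated in the trace that follows (mkdir, a series of echos, then ln -s). Below is a condensed sketch of that sequence; set -x does not print redirection targets, so the attribute file names are the standard kernel nvmet configfs ones, inferred rather than lifted from the log.

    # Sketch of configure_kernel_target from nvmf/common.sh as traced below.
    # Attribute file names are the stock nvmet configfs ones; the echo
    # targets themselves are not visible in the xtrace.
    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"   # configfs creates namespaces/ itself
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # reported as Model Number in the identify dump below
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # the non-zoned, unpartitioned disk found above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"              # expose the subsystem on the port

clean_kernel_target at the end of the test undoes this in reverse, as the teardown trace later shows: remove the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.
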
00:28:54.571 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:54.831 No valid GPT data, bailing 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:54.831 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:28:54.831 00:28:54.831 Discovery Log Number of Records 2, Generation counter 2 00:28:54.831 =====Discovery Log Entry 0====== 00:28:54.831 trtype: tcp 00:28:54.831 adrfam: ipv4 00:28:54.831 subtype: current discovery subsystem 00:28:54.831 treq: not specified, sq flow control disable supported 00:28:54.831 portid: 1 00:28:54.831 trsvcid: 4420 00:28:54.831 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:54.831 traddr: 10.0.0.1 00:28:54.831 eflags: none 00:28:54.831 sectype: none 00:28:54.831 =====Discovery Log Entry 1====== 00:28:54.831 trtype: tcp 00:28:54.831 adrfam: ipv4 00:28:54.831 subtype: nvme subsystem 00:28:54.831 treq: not specified, sq flow control disable supported 00:28:54.831 portid: 1 00:28:54.831 trsvcid: 4420 00:28:54.831 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:54.831 traddr: 10.0.0.1 00:28:54.831 eflags: none 00:28:54.831 sectype: none 00:28:54.831 14:23:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:54.831 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:54.831 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.091 ===================================================== 00:28:55.091 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:55.091 ===================================================== 00:28:55.091 Controller Capabilities/Features 00:28:55.091 ================================ 00:28:55.091 Vendor ID: 0000 00:28:55.091 Subsystem Vendor ID: 0000 00:28:55.091 Serial Number: 918ed68e7b6478b1b878 00:28:55.091 Model Number: Linux 00:28:55.091 Firmware Version: 6.7.0-68 00:28:55.091 Recommended Arb Burst: 0 00:28:55.091 IEEE OUI Identifier: 00 00 00 00:28:55.091 Multi-path I/O 00:28:55.091 May have multiple subsystem ports: No 00:28:55.091 May have multiple controllers: No 00:28:55.091 Associated with SR-IOV VF: No 00:28:55.091 Max Data Transfer Size: Unlimited 00:28:55.091 Max Number of Namespaces: 0 00:28:55.091 Max Number of I/O Queues: 1024 00:28:55.091 NVMe Specification Version (VS): 1.3 00:28:55.091 NVMe Specification Version (Identify): 1.3 00:28:55.091 Maximum Queue Entries: 1024 00:28:55.091 Contiguous Queues Required: No 00:28:55.091 Arbitration Mechanisms Supported 00:28:55.091 Weighted Round Robin: Not Supported 00:28:55.091 Vendor Specific: Not Supported 00:28:55.091 Reset Timeout: 7500 ms 00:28:55.091 Doorbell Stride: 4 bytes 00:28:55.091 NVM Subsystem Reset: Not Supported 00:28:55.091 Command Sets Supported 00:28:55.091 NVM Command Set: Supported 00:28:55.091 Boot Partition: Not Supported 00:28:55.091 Memory Page Size Minimum: 4096 bytes 00:28:55.091 Memory Page Size Maximum: 4096 bytes 00:28:55.091 Persistent Memory Region: Not Supported 00:28:55.091 Optional Asynchronous Events Supported 00:28:55.091 Namespace Attribute Notices: Not Supported 00:28:55.091 Firmware Activation Notices: Not Supported 00:28:55.091 ANA Change Notices: Not Supported 00:28:55.091 PLE Aggregate Log Change Notices: Not Supported 00:28:55.091 LBA Status Info Alert Notices: Not Supported 00:28:55.091 EGE Aggregate Log Change Notices: Not Supported 00:28:55.091 Normal NVM Subsystem Shutdown event: Not Supported 00:28:55.091 Zone Descriptor Change Notices: Not Supported 00:28:55.091 Discovery Log Change Notices: Supported 00:28:55.091 Controller Attributes 00:28:55.091 128-bit Host Identifier: Not Supported 00:28:55.091 Non-Operational Permissive Mode: Not Supported 00:28:55.091 NVM Sets: Not Supported 00:28:55.091 Read Recovery Levels: Not Supported 00:28:55.091 Endurance Groups: Not Supported 00:28:55.091 Predictable Latency Mode: Not Supported 00:28:55.091 Traffic Based Keep ALive: Not Supported 00:28:55.091 Namespace Granularity: Not Supported 00:28:55.091 SQ Associations: Not Supported 00:28:55.091 UUID List: Not Supported 00:28:55.091 Multi-Domain Subsystem: Not Supported 00:28:55.091 Fixed Capacity Management: Not Supported 00:28:55.091 Variable Capacity Management: Not Supported 00:28:55.091 Delete Endurance Group: Not Supported 00:28:55.091 Delete NVM Set: Not Supported 00:28:55.091 Extended LBA Formats Supported: Not Supported 00:28:55.091 Flexible Data Placement Supported: Not Supported 00:28:55.091 00:28:55.091 Controller Memory Buffer Support 00:28:55.091 ================================ 00:28:55.091 Supported: No 
00:28:55.091 00:28:55.091 Persistent Memory Region Support 00:28:55.091 ================================ 00:28:55.091 Supported: No 00:28:55.091 00:28:55.091 Admin Command Set Attributes 00:28:55.091 ============================ 00:28:55.091 Security Send/Receive: Not Supported 00:28:55.091 Format NVM: Not Supported 00:28:55.091 Firmware Activate/Download: Not Supported 00:28:55.091 Namespace Management: Not Supported 00:28:55.091 Device Self-Test: Not Supported 00:28:55.091 Directives: Not Supported 00:28:55.091 NVMe-MI: Not Supported 00:28:55.091 Virtualization Management: Not Supported 00:28:55.091 Doorbell Buffer Config: Not Supported 00:28:55.091 Get LBA Status Capability: Not Supported 00:28:55.091 Command & Feature Lockdown Capability: Not Supported 00:28:55.091 Abort Command Limit: 1 00:28:55.091 Async Event Request Limit: 1 00:28:55.091 Number of Firmware Slots: N/A 00:28:55.091 Firmware Slot 1 Read-Only: N/A 00:28:55.091 Firmware Activation Without Reset: N/A 00:28:55.091 Multiple Update Detection Support: N/A 00:28:55.091 Firmware Update Granularity: No Information Provided 00:28:55.091 Per-Namespace SMART Log: No 00:28:55.091 Asymmetric Namespace Access Log Page: Not Supported 00:28:55.092 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:55.092 Command Effects Log Page: Not Supported 00:28:55.092 Get Log Page Extended Data: Supported 00:28:55.092 Telemetry Log Pages: Not Supported 00:28:55.092 Persistent Event Log Pages: Not Supported 00:28:55.092 Supported Log Pages Log Page: May Support 00:28:55.092 Commands Supported & Effects Log Page: Not Supported 00:28:55.092 Feature Identifiers & Effects Log Page:May Support 00:28:55.092 NVMe-MI Commands & Effects Log Page: May Support 00:28:55.092 Data Area 4 for Telemetry Log: Not Supported 00:28:55.092 Error Log Page Entries Supported: 1 00:28:55.092 Keep Alive: Not Supported 00:28:55.092 00:28:55.092 NVM Command Set Attributes 00:28:55.092 ========================== 00:28:55.092 Submission Queue Entry Size 00:28:55.092 Max: 1 00:28:55.092 Min: 1 00:28:55.092 Completion Queue Entry Size 00:28:55.092 Max: 1 00:28:55.092 Min: 1 00:28:55.092 Number of Namespaces: 0 00:28:55.092 Compare Command: Not Supported 00:28:55.092 Write Uncorrectable Command: Not Supported 00:28:55.092 Dataset Management Command: Not Supported 00:28:55.092 Write Zeroes Command: Not Supported 00:28:55.092 Set Features Save Field: Not Supported 00:28:55.092 Reservations: Not Supported 00:28:55.092 Timestamp: Not Supported 00:28:55.092 Copy: Not Supported 00:28:55.092 Volatile Write Cache: Not Present 00:28:55.092 Atomic Write Unit (Normal): 1 00:28:55.092 Atomic Write Unit (PFail): 1 00:28:55.092 Atomic Compare & Write Unit: 1 00:28:55.092 Fused Compare & Write: Not Supported 00:28:55.092 Scatter-Gather List 00:28:55.092 SGL Command Set: Supported 00:28:55.092 SGL Keyed: Not Supported 00:28:55.092 SGL Bit Bucket Descriptor: Not Supported 00:28:55.092 SGL Metadata Pointer: Not Supported 00:28:55.092 Oversized SGL: Not Supported 00:28:55.092 SGL Metadata Address: Not Supported 00:28:55.092 SGL Offset: Supported 00:28:55.092 Transport SGL Data Block: Not Supported 00:28:55.092 Replay Protected Memory Block: Not Supported 00:28:55.092 00:28:55.092 Firmware Slot Information 00:28:55.092 ========================= 00:28:55.092 Active slot: 0 00:28:55.092 00:28:55.092 00:28:55.092 Error Log 00:28:55.092 ========= 00:28:55.092 00:28:55.092 Active Namespaces 00:28:55.092 ================= 00:28:55.092 Discovery Log Page 00:28:55.092 ================== 00:28:55.092 
Generation Counter: 2 00:28:55.092 Number of Records: 2 00:28:55.092 Record Format: 0 00:28:55.092 00:28:55.092 Discovery Log Entry 0 00:28:55.092 ---------------------- 00:28:55.092 Transport Type: 3 (TCP) 00:28:55.092 Address Family: 1 (IPv4) 00:28:55.092 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:55.092 Entry Flags: 00:28:55.092 Duplicate Returned Information: 0 00:28:55.092 Explicit Persistent Connection Support for Discovery: 0 00:28:55.092 Transport Requirements: 00:28:55.092 Secure Channel: Not Specified 00:28:55.092 Port ID: 1 (0x0001) 00:28:55.092 Controller ID: 65535 (0xffff) 00:28:55.092 Admin Max SQ Size: 32 00:28:55.092 Transport Service Identifier: 4420 00:28:55.092 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:55.092 Transport Address: 10.0.0.1 00:28:55.092 Discovery Log Entry 1 00:28:55.092 ---------------------- 00:28:55.092 Transport Type: 3 (TCP) 00:28:55.092 Address Family: 1 (IPv4) 00:28:55.092 Subsystem Type: 2 (NVM Subsystem) 00:28:55.092 Entry Flags: 00:28:55.092 Duplicate Returned Information: 0 00:28:55.092 Explicit Persistent Connection Support for Discovery: 0 00:28:55.092 Transport Requirements: 00:28:55.092 Secure Channel: Not Specified 00:28:55.092 Port ID: 1 (0x0001) 00:28:55.092 Controller ID: 65535 (0xffff) 00:28:55.092 Admin Max SQ Size: 32 00:28:55.092 Transport Service Identifier: 4420 00:28:55.092 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:55.092 Transport Address: 10.0.0.1 00:28:55.092 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:55.092 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.092 get_feature(0x01) failed 00:28:55.092 get_feature(0x02) failed 00:28:55.092 get_feature(0x04) failed 00:28:55.092 ===================================================== 00:28:55.092 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:55.092 ===================================================== 00:28:55.092 Controller Capabilities/Features 00:28:55.092 ================================ 00:28:55.092 Vendor ID: 0000 00:28:55.092 Subsystem Vendor ID: 0000 00:28:55.092 Serial Number: 2dc21df3bcef35e7df6d 00:28:55.092 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:55.092 Firmware Version: 6.7.0-68 00:28:55.092 Recommended Arb Burst: 6 00:28:55.092 IEEE OUI Identifier: 00 00 00 00:28:55.092 Multi-path I/O 00:28:55.092 May have multiple subsystem ports: Yes 00:28:55.092 May have multiple controllers: Yes 00:28:55.092 Associated with SR-IOV VF: No 00:28:55.092 Max Data Transfer Size: Unlimited 00:28:55.092 Max Number of Namespaces: 1024 00:28:55.092 Max Number of I/O Queues: 128 00:28:55.092 NVMe Specification Version (VS): 1.3 00:28:55.092 NVMe Specification Version (Identify): 1.3 00:28:55.092 Maximum Queue Entries: 1024 00:28:55.092 Contiguous Queues Required: No 00:28:55.092 Arbitration Mechanisms Supported 00:28:55.092 Weighted Round Robin: Not Supported 00:28:55.092 Vendor Specific: Not Supported 00:28:55.092 Reset Timeout: 7500 ms 00:28:55.092 Doorbell Stride: 4 bytes 00:28:55.092 NVM Subsystem Reset: Not Supported 00:28:55.092 Command Sets Supported 00:28:55.092 NVM Command Set: Supported 00:28:55.092 Boot Partition: Not Supported 00:28:55.092 Memory Page Size Minimum: 4096 bytes 00:28:55.092 Memory Page Size Maximum: 4096 bytes 00:28:55.092 
Persistent Memory Region: Not Supported 00:28:55.092 Optional Asynchronous Events Supported 00:28:55.092 Namespace Attribute Notices: Supported 00:28:55.092 Firmware Activation Notices: Not Supported 00:28:55.092 ANA Change Notices: Supported 00:28:55.092 PLE Aggregate Log Change Notices: Not Supported 00:28:55.092 LBA Status Info Alert Notices: Not Supported 00:28:55.092 EGE Aggregate Log Change Notices: Not Supported 00:28:55.092 Normal NVM Subsystem Shutdown event: Not Supported 00:28:55.092 Zone Descriptor Change Notices: Not Supported 00:28:55.092 Discovery Log Change Notices: Not Supported 00:28:55.092 Controller Attributes 00:28:55.092 128-bit Host Identifier: Supported 00:28:55.092 Non-Operational Permissive Mode: Not Supported 00:28:55.092 NVM Sets: Not Supported 00:28:55.092 Read Recovery Levels: Not Supported 00:28:55.092 Endurance Groups: Not Supported 00:28:55.092 Predictable Latency Mode: Not Supported 00:28:55.092 Traffic Based Keep ALive: Supported 00:28:55.092 Namespace Granularity: Not Supported 00:28:55.092 SQ Associations: Not Supported 00:28:55.092 UUID List: Not Supported 00:28:55.092 Multi-Domain Subsystem: Not Supported 00:28:55.092 Fixed Capacity Management: Not Supported 00:28:55.092 Variable Capacity Management: Not Supported 00:28:55.092 Delete Endurance Group: Not Supported 00:28:55.092 Delete NVM Set: Not Supported 00:28:55.092 Extended LBA Formats Supported: Not Supported 00:28:55.092 Flexible Data Placement Supported: Not Supported 00:28:55.092 00:28:55.092 Controller Memory Buffer Support 00:28:55.092 ================================ 00:28:55.092 Supported: No 00:28:55.092 00:28:55.092 Persistent Memory Region Support 00:28:55.092 ================================ 00:28:55.092 Supported: No 00:28:55.092 00:28:55.092 Admin Command Set Attributes 00:28:55.092 ============================ 00:28:55.092 Security Send/Receive: Not Supported 00:28:55.092 Format NVM: Not Supported 00:28:55.092 Firmware Activate/Download: Not Supported 00:28:55.092 Namespace Management: Not Supported 00:28:55.092 Device Self-Test: Not Supported 00:28:55.092 Directives: Not Supported 00:28:55.092 NVMe-MI: Not Supported 00:28:55.092 Virtualization Management: Not Supported 00:28:55.092 Doorbell Buffer Config: Not Supported 00:28:55.092 Get LBA Status Capability: Not Supported 00:28:55.092 Command & Feature Lockdown Capability: Not Supported 00:28:55.092 Abort Command Limit: 4 00:28:55.092 Async Event Request Limit: 4 00:28:55.092 Number of Firmware Slots: N/A 00:28:55.093 Firmware Slot 1 Read-Only: N/A 00:28:55.093 Firmware Activation Without Reset: N/A 00:28:55.093 Multiple Update Detection Support: N/A 00:28:55.093 Firmware Update Granularity: No Information Provided 00:28:55.093 Per-Namespace SMART Log: Yes 00:28:55.093 Asymmetric Namespace Access Log Page: Supported 00:28:55.093 ANA Transition Time : 10 sec 00:28:55.093 00:28:55.093 Asymmetric Namespace Access Capabilities 00:28:55.093 ANA Optimized State : Supported 00:28:55.093 ANA Non-Optimized State : Supported 00:28:55.093 ANA Inaccessible State : Supported 00:28:55.093 ANA Persistent Loss State : Supported 00:28:55.093 ANA Change State : Supported 00:28:55.093 ANAGRPID is not changed : No 00:28:55.093 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:55.093 00:28:55.093 ANA Group Identifier Maximum : 128 00:28:55.093 Number of ANA Group Identifiers : 128 00:28:55.093 Max Number of Allowed Namespaces : 1024 00:28:55.093 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:55.093 Command Effects Log Page: Supported 
00:28:55.093 Get Log Page Extended Data: Supported 00:28:55.093 Telemetry Log Pages: Not Supported 00:28:55.093 Persistent Event Log Pages: Not Supported 00:28:55.093 Supported Log Pages Log Page: May Support 00:28:55.093 Commands Supported & Effects Log Page: Not Supported 00:28:55.093 Feature Identifiers & Effects Log Page:May Support 00:28:55.093 NVMe-MI Commands & Effects Log Page: May Support 00:28:55.093 Data Area 4 for Telemetry Log: Not Supported 00:28:55.093 Error Log Page Entries Supported: 128 00:28:55.093 Keep Alive: Supported 00:28:55.093 Keep Alive Granularity: 1000 ms 00:28:55.093 00:28:55.093 NVM Command Set Attributes 00:28:55.093 ========================== 00:28:55.093 Submission Queue Entry Size 00:28:55.093 Max: 64 00:28:55.093 Min: 64 00:28:55.093 Completion Queue Entry Size 00:28:55.093 Max: 16 00:28:55.093 Min: 16 00:28:55.093 Number of Namespaces: 1024 00:28:55.093 Compare Command: Not Supported 00:28:55.093 Write Uncorrectable Command: Not Supported 00:28:55.093 Dataset Management Command: Supported 00:28:55.093 Write Zeroes Command: Supported 00:28:55.093 Set Features Save Field: Not Supported 00:28:55.093 Reservations: Not Supported 00:28:55.093 Timestamp: Not Supported 00:28:55.093 Copy: Not Supported 00:28:55.093 Volatile Write Cache: Present 00:28:55.093 Atomic Write Unit (Normal): 1 00:28:55.093 Atomic Write Unit (PFail): 1 00:28:55.093 Atomic Compare & Write Unit: 1 00:28:55.093 Fused Compare & Write: Not Supported 00:28:55.093 Scatter-Gather List 00:28:55.093 SGL Command Set: Supported 00:28:55.093 SGL Keyed: Not Supported 00:28:55.093 SGL Bit Bucket Descriptor: Not Supported 00:28:55.093 SGL Metadata Pointer: Not Supported 00:28:55.093 Oversized SGL: Not Supported 00:28:55.093 SGL Metadata Address: Not Supported 00:28:55.093 SGL Offset: Supported 00:28:55.093 Transport SGL Data Block: Not Supported 00:28:55.093 Replay Protected Memory Block: Not Supported 00:28:55.093 00:28:55.093 Firmware Slot Information 00:28:55.093 ========================= 00:28:55.093 Active slot: 0 00:28:55.093 00:28:55.093 Asymmetric Namespace Access 00:28:55.093 =========================== 00:28:55.093 Change Count : 0 00:28:55.093 Number of ANA Group Descriptors : 1 00:28:55.093 ANA Group Descriptor : 0 00:28:55.093 ANA Group ID : 1 00:28:55.093 Number of NSID Values : 1 00:28:55.093 Change Count : 0 00:28:55.093 ANA State : 1 00:28:55.093 Namespace Identifier : 1 00:28:55.093 00:28:55.093 Commands Supported and Effects 00:28:55.093 ============================== 00:28:55.093 Admin Commands 00:28:55.093 -------------- 00:28:55.093 Get Log Page (02h): Supported 00:28:55.093 Identify (06h): Supported 00:28:55.093 Abort (08h): Supported 00:28:55.093 Set Features (09h): Supported 00:28:55.093 Get Features (0Ah): Supported 00:28:55.093 Asynchronous Event Request (0Ch): Supported 00:28:55.093 Keep Alive (18h): Supported 00:28:55.093 I/O Commands 00:28:55.093 ------------ 00:28:55.093 Flush (00h): Supported 00:28:55.093 Write (01h): Supported LBA-Change 00:28:55.093 Read (02h): Supported 00:28:55.093 Write Zeroes (08h): Supported LBA-Change 00:28:55.093 Dataset Management (09h): Supported 00:28:55.093 00:28:55.093 Error Log 00:28:55.093 ========= 00:28:55.093 Entry: 0 00:28:55.093 Error Count: 0x3 00:28:55.093 Submission Queue Id: 0x0 00:28:55.093 Command Id: 0x5 00:28:55.093 Phase Bit: 0 00:28:55.093 Status Code: 0x2 00:28:55.093 Status Code Type: 0x0 00:28:55.093 Do Not Retry: 1 00:28:55.093 Error Location: 0x28 00:28:55.093 LBA: 0x0 00:28:55.093 Namespace: 0x0 00:28:55.093 Vendor Log 
Page: 0x0 00:28:55.093 ----------- 00:28:55.093 Entry: 1 00:28:55.093 Error Count: 0x2 00:28:55.093 Submission Queue Id: 0x0 00:28:55.093 Command Id: 0x5 00:28:55.093 Phase Bit: 0 00:28:55.093 Status Code: 0x2 00:28:55.093 Status Code Type: 0x0 00:28:55.093 Do Not Retry: 1 00:28:55.093 Error Location: 0x28 00:28:55.093 LBA: 0x0 00:28:55.093 Namespace: 0x0 00:28:55.093 Vendor Log Page: 0x0 00:28:55.093 ----------- 00:28:55.093 Entry: 2 00:28:55.093 Error Count: 0x1 00:28:55.093 Submission Queue Id: 0x0 00:28:55.093 Command Id: 0x4 00:28:55.093 Phase Bit: 0 00:28:55.093 Status Code: 0x2 00:28:55.093 Status Code Type: 0x0 00:28:55.093 Do Not Retry: 1 00:28:55.093 Error Location: 0x28 00:28:55.093 LBA: 0x0 00:28:55.093 Namespace: 0x0 00:28:55.093 Vendor Log Page: 0x0 00:28:55.093 00:28:55.093 Number of Queues 00:28:55.093 ================ 00:28:55.093 Number of I/O Submission Queues: 128 00:28:55.093 Number of I/O Completion Queues: 128 00:28:55.093 00:28:55.093 ZNS Specific Controller Data 00:28:55.093 ============================ 00:28:55.093 Zone Append Size Limit: 0 00:28:55.093 00:28:55.093 00:28:55.093 Active Namespaces 00:28:55.093 ================= 00:28:55.093 get_feature(0x05) failed 00:28:55.093 Namespace ID:1 00:28:55.093 Command Set Identifier: NVM (00h) 00:28:55.093 Deallocate: Supported 00:28:55.093 Deallocated/Unwritten Error: Not Supported 00:28:55.093 Deallocated Read Value: Unknown 00:28:55.093 Deallocate in Write Zeroes: Not Supported 00:28:55.093 Deallocated Guard Field: 0xFFFF 00:28:55.093 Flush: Supported 00:28:55.093 Reservation: Not Supported 00:28:55.093 Namespace Sharing Capabilities: Multiple Controllers 00:28:55.093 Size (in LBAs): 1953525168 (931GiB) 00:28:55.093 Capacity (in LBAs): 1953525168 (931GiB) 00:28:55.093 Utilization (in LBAs): 1953525168 (931GiB) 00:28:55.093 UUID: 5d3b1f25-82e2-497d-9bae-5366b0d9a532 00:28:55.093 Thin Provisioning: Not Supported 00:28:55.093 Per-NS Atomic Units: Yes 00:28:55.093 Atomic Boundary Size (Normal): 0 00:28:55.093 Atomic Boundary Size (PFail): 0 00:28:55.093 Atomic Boundary Offset: 0 00:28:55.093 NGUID/EUI64 Never Reused: No 00:28:55.093 ANA group ID: 1 00:28:55.093 Namespace Write Protected: No 00:28:55.093 Number of LBA Formats: 1 00:28:55.093 Current LBA Format: LBA Format #00 00:28:55.093 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:55.093 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:55.093 rmmod nvme_tcp 00:28:55.093 rmmod nvme_fabrics 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:55.093 14:23:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:55.093 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:55.094 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:55.094 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:55.094 14:23:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:57.627 14:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:57.627 14:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:28:57.627 14:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:28:57.627 14:23:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:28:57.627 14:23:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:57.627 14:23:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:28:57.627 14:23:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:28:57.627 14:23:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:28:57.627 14:23:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:28:57.627 14:23:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:28:57.627 14:23:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:28:58.614 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:28:58.614 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:28:58.614 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:28:58.614 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:28:58.614 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:28:58.614 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:28:58.614 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:28:58.614 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:28:58.873 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:28:59.810 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:28:59.810
00:28:59.810 real 0m10.815s
00:28:59.810 user 0m2.266s
00:28:59.810 sys 0m4.576s
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:28:59.810 ************************************
00:28:59.810 END TEST nvmf_identify_kernel_target ************************************
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:59.810 ************************************
00:28:59.810 START TEST nvmf_auth_host ************************************
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:28:59.810 * Looking for test storage...
00:28:59.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
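The clean_kernel_target sequence traced at the top of this excerpt tears the kernel NVMe-oF target down leaf-first, as configfs requires. Collected into a standalone sketch for reference (paths taken verbatim from the trace; the redirect target of the bare 'echo 0' is not visible in xtrace and is assumed here to be the namespace enable attribute, which is where the kernel nvmet layout normally takes it):

# hedged shell sketch of the traced teardown; run as root
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subsys/namespaces/1/enable"                # quiesce the namespace (assumed target of 'echo 0')
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink the subsystem from the port
rmdir "$subsys/namespaces/1"                          # remove the namespace first,
rmdir "$port"                                         # then the port,
rmdir "$subsys"                                       # then the subsystem itself
modprobe -r nvmet_tcp nvmet                           # finally unload the target modules

The ordering matters: configfs refuses to rmdir a directory that still has children or incoming symlinks, which is why the port symlink is removed before anything else.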
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:28:59.810 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable
00:28:59.811 14:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=()
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=()
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=()
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=()
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=()
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=()
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=()
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:02.343 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:29:02.344 Found 0000:84:00.0 (0x8086 - 0x159b)
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:29:02.344 Found 0000:84:00.1 (0x8086 - 0x159b)
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:29:02.344 Found net devices under 0000:84:00.0: cvl_0_0
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:29:02.344 Found net devices under 0000:84:00.1: cvl_0_1
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:29:02.344 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:02.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:02.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms
00:29:02.604
00:29:02.604 --- 10.0.0.2 ping statistics ---
00:29:02.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.604 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:02.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:02.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms
00:29:02.604
00:29:02.604 --- 10.0.0.1 ping statistics ---
00:29:02.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.604 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2622613
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2622613
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2622613 ']'
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
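Everything nvmf_tcp_init traced above reduces to a small two-interface topology: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator side (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic crosses a real link even though both ends sit on one host. A minimal sketch using exactly the names and addresses from the trace:

ip netns add cvl_0_0_ns_spdk                    # namespace that will own the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                              # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt is launched through 'ip netns exec cvl_0_0_ns_spdk' just above: the SPDK target process has to live in the namespace that owns cvl_0_0.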
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:02.604 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f355536dc5b7d236b87715c8e8a5ecd1
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9r8
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f355536dc5b7d236b87715c8e8a5ecd1 0
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f355536dc5b7d236b87715c8e8a5ecd1 0
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f355536dc5b7d236b87715c8e8a5ecd1
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9r8
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9r8
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.9r8
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e15c7c7291819101b7014ef7653845f6276a3f26b8f1b4c9551f823aa447cb3
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kSZ
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e15c7c7291819101b7014ef7653845f6276a3f26b8f1b4c9551f823aa447cb3 3
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e15c7c7291819101b7014ef7653845f6276a3f26b8f1b4c9551f823aa447cb3 3
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e15c7c7291819101b7014ef7653845f6276a3f26b8f1b4c9551f823aa447cb3
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kSZ
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kSZ
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kSZ
00:29:03.171 14:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:29:03.171 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.171 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.171 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.171 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:29:03.171 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:29:03.171 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:29:03.171 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f60237a243f1392bdcf14a0f5976429af3ff1797277096e
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5DH
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f60237a243f1392bdcf14a0f5976429af3ff1797277096e 0
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f60237a243f1392bdcf14a0f5976429af3ff1797277096e 0
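Each gen_dhchap_key call in this stretch of the trace (the same pattern repeats for all five key/ckey pairs) draws len/2 random bytes as a hex string with xxd and hands them to format_dhchap_key, whose python step produces the interchange form that appears later in the log: DHHC-1:<two-hex-digit digest id>:<base64 payload>:. The base64 payloads visible further down decode to the ASCII hex string itself plus four trailing bytes, consistent with the NVMe-oF DH-HMAC-CHAP secret representation that appends a CRC32 of the key before encoding. A hedged reconstruction of the recipe (the CRC32/little-endian detail is inferred from that format, not shown by xtrace):

# sketch of 'gen_dhchap_key <digest> <len>', e.g. gen_dhchap_key null 32
gen_dhchap_key() {
    local digest=$1 len=$2 key
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len ASCII hex characters
    # the hex string itself is the secret; append CRC32 (assumed little-endian) and base64-encode
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${ids[$digest]}"
}

The digest id only labels which HMAC the key is intended for (00 meaning no transformation); it does not change the key material itself.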
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f60237a243f1392bdcf14a0f5976429af3ff1797277096e
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.172 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5DH
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5DH
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5DH
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384
00:29:03.429 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8219caf58b3f954fbb9a11430cefdef0c84049738569e58a
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ooz
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8219caf58b3f954fbb9a11430cefdef0c84049738569e58a 2
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8219caf58b3f954fbb9a11430cefdef0c84049738569e58a 2
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8219caf58b3f954fbb9a11430cefdef0c84049738569e58a
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ooz
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ooz
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ooz
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b4c498deddd76e339289f66dcae61d5d
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8Ul
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b4c498deddd76e339289f66dcae61d5d 1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b4c498deddd76e339289f66dcae61d5d 1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b4c498deddd76e339289f66dcae61d5d
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8Ul
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8Ul
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8Ul
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=795491f5677cc2a1212853fb25485d57
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Z1r
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 795491f5677cc2a1212853fb25485d57 1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 795491f5677cc2a1212853fb25485d57 1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=795491f5677cc2a1212853fb25485d57
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1
00:29:03.430 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.687 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Z1r
00:29:03.687 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Z1r
00:29:03.687 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Z1r
00:29:03.687 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=199bd0122888a8c5bc721646e65269c20a28ff0009a54d6f
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rFf
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 199bd0122888a8c5bc721646e65269c20a28ff0009a54d6f 2
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 199bd0122888a8c5bc721646e65269c20a28ff0009a54d6f 2
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=199bd0122888a8c5bc721646e65269c20a28ff0009a54d6f
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rFf
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rFf
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rFf
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=54087defb8cd2fe8421944bdbbc735a5
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sy9
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 54087defb8cd2fe8421944944bdbbc735a5 0
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 54087defb8cd2fe8421944bdbbc735a5 0
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=54087defb8cd2fe8421944bdbbc735a5
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sy9
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sy9
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sy9
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f157fedaa122935cfd6aa614022947173576c7b513d559a991dcb61f8fece77d
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JEq
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f157fedaa122935cfd6aa614022947173576c7b513d559a991dcb61f8fece77d 3
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f157fedaa122935cfd6aa614022947173576c7b513d559a991dcb61f8fece77d 3
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f157fedaa122935cfd6aa614022947173576c7b513d559a991dcb61f8fece77d
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JEq
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JEq
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JEq
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2622613
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2622613 ']'
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:03.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:03.688 14:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9r8
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kSZ ]]
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kSZ
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5DH
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ooz ]]
00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ooz
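rpc_cmd in the registration loop here is the harness wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt started earlier over the Unix socket /var/tmp/spdk.sock. Expanded by hand, the first iteration would be equivalent to something like the following (the explicit -s flag and paths are an illustrative assumption; rpc_cmd hides them):

# hypothetical expansion of 'rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9r8'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    keyring_file_add_key key0 /tmp/spdk.key-null.9r8
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kSZ

keyring_file_add_key registers each DHHC-1 file as a named key in the target's keyring, so later RPCs can refer to key0/ckey0 by name rather than by file path.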
/tmp/spdk.key-sha384.ooz 00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.621 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8Ul 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Z1r ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z1r 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.rFf 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sy9 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sy9 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JEq 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.622 14:23:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:04.622 14:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:05.555 Waiting for block devices as requested 00:29:05.555 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:29:05.812 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:05.812 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:06.069 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:06.069 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:06.069 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:06.069 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:06.326 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:06.326 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:06.326 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:06.587 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:06.587 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:06.587 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:06.847 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:06.847 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:06.847 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:06.847 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:07.412 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:07.413 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:07.672 No valid GPT data, bailing 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:07.672 14:23:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:29:07.672 00:29:07.672 Discovery Log Number of Records 2, Generation counter 2 00:29:07.672 =====Discovery Log Entry 0====== 00:29:07.672 trtype: tcp 00:29:07.672 adrfam: ipv4 00:29:07.672 subtype: current discovery subsystem 00:29:07.672 treq: not specified, sq flow control disable supported 00:29:07.672 portid: 1 00:29:07.672 trsvcid: 4420 00:29:07.672 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:07.672 traddr: 10.0.0.1 00:29:07.672 eflags: none 00:29:07.672 sectype: none 00:29:07.672 =====Discovery Log Entry 1====== 00:29:07.672 trtype: tcp 00:29:07.672 adrfam: ipv4 00:29:07.672 subtype: nvme subsystem 00:29:07.672 treq: not specified, sq flow control disable supported 00:29:07.672 portid: 1 00:29:07.672 trsvcid: 4420 00:29:07.672 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:07.672 traddr: 10.0.0.1 00:29:07.672 eflags: none 00:29:07.672 sectype: none 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
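The mkdir/echo/ln sequence above is standard kernel nvmet configfs provisioning. xtrace does not show redirection targets, so the attribute paths in this sketch are assumptions based on the stock nvmet configfs layout rather than the script's literal redirects; the values are the ones echoed in the log. The nvme discover output that follows confirms the result: both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable at 10.0.0.1:4420.

# Presumed shape of the configfs writes traced above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # back namespace 1 with the selected disk
echo 1            > "$subsys/namespaces/1/enable"        # bring the namespace online
echo 10.0.0.1     > "$port/addr_traddr"                  # listen address (NVMF_INITIATOR_IP)
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port
# (the traced "echo SPDK-nqn..." and "echo 1"/"echo 0" writes, presumably the
# model string and allow_any_host toggles, are omitted from this sketch)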
-- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
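nvmet_auth_set_key programs the kernel target side of DH-HMAC-CHAP for host0: a digest, a DH group, the host key, and, when present, a controller key for bidirectional authentication. In the DHHC-1:XX:<base64>: key format seen throughout this trace, the two-digit field records how the secret was transformed (00 = cleartext, 01/02/03 = SHA-256/384/512). The redirect targets are again invisible in xtrace, so the attribute names below are assumptions based on the kernel's nvmet host auth attributes; the keys are abbreviated here but appear verbatim above.

# Presumed writes behind nvmet_auth_set_key sha256 ffdhe2048 1.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest used by the exchange
echo ffdhe2048      > "$host/dhchap_dhgroup"   # FFDHE group
echo 'DHHC-1:00:MWY2...K5YOCw==:' > "$host/dhchap_key"       # host key (keyid 1)
echo 'DHHC-1:02:ODIx...AW2EbA==:' > "$host/dhchap_ctrl_key"  # controller key -> bidirectional

The ip_candidates lines that recur before every attach are get_main_ns_ip resolving the initiator address: the transport maps to the name of the variable holding the IP, which is then dereferenced, which is why the trace ends each run with echo 10.0.0.1. In plain bash:

declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
NVMF_INITIATOR_IP=10.0.0.1
ip=${ip_candidates[tcp]}   # -> the string "NVMF_INITIATOR_IP"
echo "${!ip}"              # indirect expansion -> 10.0.0.1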
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.672 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.673 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.931 nvme0n1 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.931 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
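On the initiator side, each combination is exercised through three SPDK RPCs. The recurring xtrace_disable / [[ 0 == 0 ]] / set +x triplets are the rpc_cmd wrapper suppressing trace output around each call, and the bare nvme0n1 lines are the bdev names the attach RPC prints on success. A sketch of one round trip, assuming rpc.py stands in for rpc_cmd and that the keyring names key1/ckey1 were registered earlier in the test (not shown in this section):

# One connect_authenticate round trip, as traced above.
rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # prints "nvme0n1" when auth succeeds
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # verify the controller
rpc.py bdev_nvme_detach_controller nvme0         # tear down before the next combination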
00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.932 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.190 nvme0n1 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.190 14:23:24 
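From here the section is one long sweep: the for digest / for dhgroup / for keyid lines traced above drive it, and every iteration that follows repeats the same set-key, connect, verify, detach pattern. The loop skeleton, reconstructed from the trace (names as in auth.sh; keys[] and ckeys[] hold the DHHC-1 strings echoed throughout):

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                           # keyids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach
        done
    done
done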
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.190 14:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.448 nvme0n1 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.448 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.706 nvme0n1 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.706 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.964 nvme0n1 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.964 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 
00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.965 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.223 nvme0n1 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.223 14:23:25 
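keyid 4 is the unidirectional case: its controller key is empty (ckey= above), so the [[ -z '' ]] guard skips the controller key on the target and the attach carries --dhchap-key key4 with no --dhchap-ctrlr-key; the target authenticates the host, but the host does not authenticate the controller. The trick that makes the flag pair vanish is the array expansion traced at host/auth.sh@58:

# ckeys[4] is empty, so the ${var:+...} expansion yields zero words and the
# attach command simply receives no --dhchap-ctrlr-key arguments.
declare -a ckeys
ckeys[4]=
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # -> 0: nothing is appended for keyid 4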
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:09.223 
14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.223 14:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.481 nvme0n1 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:09.481 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.482 14:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.482 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.740 nvme0n1 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.740 14:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.740 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.999 nvme0n1 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.999 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.000 14:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.000 14:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 nvme0n1 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.258 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.516 nvme0n1 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.516 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.775 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.033 nvme0n1 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.033 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:11.291 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.292 14:23:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.292 14:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.550 nvme0n1 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
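Note: the secrets echoed into the target above use the standard NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64>:, where <hh> selects the secret transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick format check as an illustrative helper (not part of the suite; the CRC layout is stated from the spec, not from this log):

check_dhchap_secret() {
    local key=$1 payload
    [[ $key =~ ^DHHC-1:(00|01|02|03):([A-Za-z0-9+/=]+):$ ]] || return 1
    # decoded payload = secret (32, 48 or 64 bytes) + 4-byte CRC-32
    payload=$(printf '%s' "${BASH_REMATCH[2]}" | base64 -d | wc -c)
    case $((payload - 4)) in
        32 | 48 | 64) return 0 ;;
        *) return 1 ;;
    esac
}

For example, key2 above (DHHC-1:01:...) decodes to 32 secret bytes plus the CRC, consistent with its 01 (SHA-256) transformation tag.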
00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.550 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.808 nvme0n1 00:29:11.808 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.808 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.808 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.808 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.808 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
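Note: the nvmf/common.sh@741-755 block repeated before every attach is get_main_ns_ip, which maps the transport to the name of the variable holding the target address and then dereferences it; that is why the trace first prints ip=NVMF_INITIATOR_IP and only afterwards 10.0.0.1. A sketch consistent with the traced evaluation order (the exact source may differ):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # both -z tests appear separately in xtrace because [[ ... || ... ]]
    # short-circuits and bash traces each side as it is evaluated
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable name, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # indirect expansion yields 10.0.0.1 here
    echo "${!ip}"
}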
00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:12.086 14:23:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.086 14:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.362 nvme0n1 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:12.362 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.363 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.929 nvme0n1 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.929 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.930 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.930 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:29:12.930 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.930 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:12.930 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.930 14:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.496 nvme0n1 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.496 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:13.497 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.497 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:13.497 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:13.497 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:13.497 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:13.497 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.497 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.062 nvme0n1 00:29:14.062 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.062 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.062 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.062 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.062 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.062 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.320 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.320 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.320 14:23:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.320 14:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.320 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.886 nvme0n1 00:29:14.886 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.886 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.886 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.886 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.886 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.886 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:15.144 
14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.144 14:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.710 nvme0n1 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.710 14:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.276 nvme0n1 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:16.276 14:23:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.276 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.277 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.277 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.277 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:16.277 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.277 14:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.661 nvme0n1 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.661 14:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.593 nvme0n1 00:29:18.593 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.593 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.593 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.593 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.593 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.593 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.851 14:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.851 14:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.784 nvme0n1 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.784 14:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.784 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.785 14:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.158 nvme0n1 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.158 14:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.158 14:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.091 nvme0n1 00:29:22.091 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.091 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.091 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.091 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.091 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.091 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.349 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.349 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.349 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.349 14:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:22.349 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:22.349 
14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.350 nvme0n1 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.350 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.608 nvme0n1 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.608 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:22.609 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.609 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.867 nvme0n1 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:22.867 14:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.867 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.125 nvme0n1 00:29:23.125 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.125 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.125 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.125 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.125 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:23.126 14:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.384 nvme0n1 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:23.384 14:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.384 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.385 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.385 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.385 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.385 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.385 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.385 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.385 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.643 nvme0n1 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.643 14:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.643 14:23:40 
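Each secret in the trace uses the DHHC-1 representation from NVMe in-band authentication: DHHC-1:<transform id>:<base64 body>:, where the id selects an optional secret transformation (00 none, 01/02/03 SHA-256/384/512) and the body carries the raw secret followed by a CRC-32 trailer. A quick sanity check on the keyid 0 secret from this pass (36 decoded bytes = 32-byte secret + 4-byte CRC):

  key='DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl:'
  body=${key#DHHC-1:*:}; body=${body%:}   # peel the base64 field
  echo -n "$body" | base64 -d | wc -c     # -> 36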
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.643 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.901 nvme0n1 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:23.901 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.902 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.160 nvme0n1 00:29:24.160 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.160 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.160 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.160 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.160 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.160 14:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.160 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.418 nvme0n1 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.418 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:24.676 
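The echo records at auth.sh@48-51 are nvmet_auth_set_key writing the digest, dhgroup, and key into the kernel target; xtrace does not print redirections, so the destinations are invisible here. A sketch of the presumed configfs targets, noting that the nvmet host-entry attribute names are an assumption, not something shown in this log:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"
  echo ffdhe3072 > "$host/dhchap_dhgroup"
  echo "$key" > "$host/dhchap_key"   # a non-empty $ckey would go to dhchap_ctrl_key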
14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.676 nvme0n1 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.676 
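keyid 4 has no controller secret (ckey is empty, hence the [[ -z '' ]] at auth.sh@51), and auth.sh@58 uses bash ":+" expansion to drop the --dhchap-ctrlr-key argument pair entirely in that case. The idiom in isolation, with illustrative values:

  ckeys=([0]=demo-secret [4]=)   # stand-ins; index 4 deliberately empty
  keyid=4
  args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#args[@]}"             # -> 0; with keyid=0 it prints 2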
14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.676 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.934 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.192 nvme0n1 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.192 14:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.192 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:25.193 14:23:42 
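Before every attach, get_main_ns_ip (nvmf/common.sh@741-755) picks the address: an associative array maps the transport to an environment-variable name, which is then dereferenced. A sketch of that logic; the $TEST_TRANSPORT selector is an assumption, since the trace only shows the already-resolved values:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -n $ip && -n ${!ip} ]] && echo "${!ip}"   # indirect expansion
  }
  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # -> 10.0.0.1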
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.193 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.758 nvme0n1 00:29:25.758 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.758 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.759 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.038 nvme0n1 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.038 14:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.334 nvme0n1 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.335 14:23:43 
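The whole sha384 pass is driven by two nested loops (auth.sh@101-104): the outer walks the DH groups, the inner walks the key indices. Reconstructed shape only; the two helpers live in auth.sh, and since just ffdhe3072/4096/6144 appear in this excerpt the full group list is an assumption:

  keys=(k0 k1 k2 k3 k4)   # stand-ins; auth.sh fills these with DHHC-1 secrets
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed list
  for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
      for keyid in "${!keys[@]}"; do           # host/auth.sh@102
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done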
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.335 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.899 nvme0n1 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:26.899 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.900 14:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.465 nvme0n1 00:29:27.465 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.465 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.465 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.465 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.465 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.723 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.724 14:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.290 nvme0n1 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.290 14:23:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.290 14:23:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.290 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.291 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:28.291 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.291 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.224 nvme0n1 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.224 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:29.225 14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.225 
14:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.791 nvme0n1 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.791 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.792 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:29.792 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.792 14:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.357 nvme0n1 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.357 14:23:47 
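
Key id 4 above is the single unidirectional case in the key set: its controller key is empty, which is why the trace shows [[ -z '' ]] and an attach carrying --dhchap-key key4 but no --dhchap-ctrlr-key. The mechanism is the array assignment at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): bash's ${var:+word} alternate-value expansion produces no words at all when the variable is unset or empty. A minimal standalone demo of the same construct, with hypothetical values:

    #!/usr/bin/env bash
    # ${var:+word} expands to nothing when var is unset or empty, so an
    # empty controller secret contributes zero extra arguments.
    ckeys=("some-ctrlr-secret" "")
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
    # keyid=1 -> 0 extra arg(s):
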
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.357 14:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.729 nvme0n1 00:29:31.729 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.729 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.729 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.729 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.729 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.729 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.730 14:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.102 nvme0n1 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.102 
14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.102 14:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.475 nvme0n1 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.475 14:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 nvme0n1 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 14:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.847 14:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.847 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.848 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.848 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.848 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.848 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.848 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:35.848 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.848 14:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.778 nvme0n1 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
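
At this point the outermost loop advances: the for digest header at host/auth.sh@100 moves the run from sha384 to sha512, and the DH-group sweep restarts at ffdhe2048 with the same five key ids replayed. The loop headers at host/auth.sh@100-103 imply a sweep of roughly the following shape; this is a sketch, since the excerpt only proves the sha384/sha512 digests and the ffdhe2048/ffdhe6144/ffdhe8192 groups, and the full array contents are assumed.

    # Nested sweep inferred from the host/auth.sh@100-@103 loop headers.
    # keys/ckeys and the two helpers come from host/auth.sh itself.
    digests=(sha256 sha384 sha512)                                # assumed list
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed list
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do                        # key ids 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done
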
ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.778 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:37.036 nvme0n1 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.036 14:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.293 nvme0n1 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:37.293 
14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.293 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.550 nvme0n1 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.550 
14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.550 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.808 nvme0n1 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.808 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.066 nvme0n1 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.066 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.067 14:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.324 nvme0n1 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.324 
14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.324 14:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.324 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.582 nvme0n1 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:38.582 14:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.582 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.840 nvme0n1 00:29:38.840 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.840 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.840 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.840 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.840 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.840 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.098 14:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.098 nvme0n1 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.098 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.356 14:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:39.356 
14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.356 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.357 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
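For readability, a minimal sketch of the single pass that this trace keeps repeating for every digest/dhgroup/keyid combination. It is reconstructed only from the xtrace lines above: nvmet_auth_set_key, rpc_cmd, get_main_ns_ip and the keys[]/ckeys[] arrays are the test script's own helpers and data, visible here only through their trace output, so their definitions are assumed rather than shown.

  # One pass of the auth matrix, as reconstructed from the xtrace above.
  # digest, dhgroup and keyid come from the enclosing loops
  # (host/auth.sh@101-103); keys[]/ckeys[] hold the DHHC-1 secrets.
  digest=sha512 dhgroup=ffdhe3072 keyid=4

  # Target side: program the key for this host (helper from auth.sh; its
  # echoes of 'hmac(sha512)', the dhgroup and the DHHC-1 strings are what
  # appear at host/auth.sh@48-51 in the trace).
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: restrict the initiator to the combination under test ...
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
      --dhchap-dhgroups "$dhgroup"

  # ... then connect with the matching key. ckeys[4] is empty (the
  # '[[ -z '' ]]' check above), so the :+ expansion at host/auth.sh@58
  # drops --dhchap-ctrlr-key entirely for keyid 4.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"

  # Verify the controller authenticated and came up, then tear it down
  # before the next combination (host/auth.sh@64-65).
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The address passed to -a follows from the get_main_ns_ip trace at nvmf/common.sh@741-755: the ip_candidates map selects NVMF_INITIATOR_IP for the tcp transport, which resolves to 10.0.0.1 on this rig.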
00:29:39.614 nvme0n1 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:39.614 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:39.615 14:23:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.615 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.873 nvme0n1 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.873 14:23:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.873 14:23:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.873 14:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.464 nvme0n1 00:29:40.464 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.464 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.465 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.724 nvme0n1 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.724 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.725 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.298 nvme0n1 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:41.298 14:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.298 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:41.298 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.298 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.298 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:41.298 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:41.298 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.299 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.556 nvme0n1 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.556 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.557 14:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.557 14:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.122 nvme0n1 00:29:42.122 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.122 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.122 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.122 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.122 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.380 14:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.380 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.946 nvme0n1 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.946 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.947 14:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.881 nvme0n1 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.881 14:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.447 nvme0n1 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.447 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.705 14:24:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:44.705 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.706 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.272 nvme0n1 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjM1NTUzNmRjNWI3ZDIzNmI4NzcxNWM4ZThhNWVjZDENrfRl: 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUxNWM3YzcyOTE4MTkxMDFiNzAxNGVmNzY1Mzg0NWY2Mjc2YTNmMjZiOGYxYjRjOTU1MWY4MjNhYTQ0N2NiM9d4Alc=: 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.272 14:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.205 nvme0n1 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:46.205 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.463 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.463 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.463 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.463 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.463 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.463 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.463 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.464 14:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.397 nvme0n1 00:29:47.397 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.397 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.397 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.397 14:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.397 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.397 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjRjNDk4ZGVkZGQ3NmUzMzkyODlmNjZkY2FlNjFkNWT7d7kv: 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: ]] 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Nzk1NDkxZjU2NzdjYzJhMTIxMjg1M2ZiMjU0ODVkNTfMfyZr: 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.655 14:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.655 14:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.588 nvme0n1 00:29:48.588 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.588 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.588 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.588 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.588 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.588 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTk5YmQwMTIyODg4YThjNWJjNzIxNjQ2ZTY1MjY5YzIwYTI4ZmYwMDA5YTU0ZDZmyebklg==: 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: ]] 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQwODdkZWZiOGNkMmZlODQyMTk0NGJkYmJjNzM1YTU31WD0: 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.846 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:48.847 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.847 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:48.847 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:48.847 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:48.847 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:48.847 14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.847 
14:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.780 nvme0n1 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjE1N2ZlZGFhMTIyOTM1Y2ZkNmFhNjE0MDIyOTQ3MTczNTc2YzdiNTEzZDU1OWE5OTFkY2I2MWY4ZmVjZTc3ZIYViTk=: 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.780 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.038 14:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.972 nvme0n1 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY2MDIzN2EyNDNmMTM5MmJkY2YxNGEwZjU5NzY0MjlhZjNmZjE3OTcyNzcwOTZlK5YOCw==: 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: ]] 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODIxOWNhZjU4YjNmOTU0ZmJiOWExMTQzMGNlZmRlZjBjODQwNDk3Mzg1NjllNThhAW2EbA==: 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.972 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.973 request: 00:29:50.973 { 00:29:50.973 "name": "nvme0", 00:29:50.973 "trtype": "tcp", 00:29:50.973 "traddr": "10.0.0.1", 00:29:50.973 "adrfam": "ipv4", 00:29:50.973 "trsvcid": "4420", 00:29:50.973 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:50.973 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:50.973 "prchk_reftag": false, 00:29:50.973 "prchk_guard": false, 00:29:50.973 "hdgst": false, 00:29:50.973 "ddgst": false, 00:29:50.973 "method": "bdev_nvme_attach_controller", 00:29:50.973 "req_id": 1 00:29:50.973 } 00:29:50.973 Got JSON-RPC error response 00:29:50.973 response: 00:29:50.973 { 00:29:50.973 "code": -5, 00:29:50.973 "message": "Input/output error" 00:29:50.973 } 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.973 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.231 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:51.232 14:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.232 request: 00:29:51.232 { 00:29:51.232 "name": "nvme0", 00:29:51.232 "trtype": "tcp", 00:29:51.232 "traddr": "10.0.0.1", 00:29:51.232 "adrfam": "ipv4", 00:29:51.232 "trsvcid": "4420", 00:29:51.232 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:51.232 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:51.232 "prchk_reftag": false, 00:29:51.232 "prchk_guard": false, 00:29:51.232 "hdgst": false, 00:29:51.232 "ddgst": false, 00:29:51.232 "dhchap_key": "key2", 00:29:51.232 "method": "bdev_nvme_attach_controller", 00:29:51.232 "req_id": 1 00:29:51.232 } 00:29:51.232 Got JSON-RPC error response 00:29:51.232 response: 00:29:51.232 { 00:29:51.232 "code": -5, 00:29:51.232 "message": "Input/output error" 00:29:51.232 } 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.232 14:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.232 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.232 request: 00:29:51.232 { 00:29:51.232 "name": "nvme0", 00:29:51.232 "trtype": "tcp", 00:29:51.232 "traddr": "10.0.0.1", 00:29:51.232 "adrfam": "ipv4", 00:29:51.491 "trsvcid": "4420", 00:29:51.491 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:51.491 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:51.491 "prchk_reftag": false, 00:29:51.491 "prchk_guard": false, 00:29:51.491 "hdgst": false, 00:29:51.491 "ddgst": false, 00:29:51.491 "dhchap_key": "key1", 00:29:51.491 "dhchap_ctrlr_key": "ckey2", 00:29:51.491 "method": "bdev_nvme_attach_controller", 00:29:51.491 "req_id": 1 00:29:51.491 } 00:29:51.491 Got JSON-RPC error response 00:29:51.491 response: 00:29:51.491 { 00:29:51.491 "code": -5, 00:29:51.491 "message": "Input/output error" 00:29:51.491 } 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:51.491 rmmod nvme_tcp 00:29:51.491 rmmod nvme_fabrics 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2622613 ']' 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2622613 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2622613 ']' 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2622613 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2622613 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2622613' 00:29:51.491 killing process with pid 2622613 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2622613 00:29:51.491 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2622613 00:29:51.749 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:51.749 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:51.750 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:51.750 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:51.750 14:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:51.750 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.750 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.750 14:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:53.654 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:53.913 14:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:55.318 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:55.318 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:55.579 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:55.579 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:55.579 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:55.579 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:55.579 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:55.579 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:55.579 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:56.521 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:29:56.521 14:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9r8 /tmp/spdk.key-null.5DH /tmp/spdk.key-sha256.8Ul /tmp/spdk.key-sha384.rFf /tmp/spdk.key-sha512.JEq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:56.521 14:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:57.905 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:57.905 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:57.905 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:57.905 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:57.905 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:57.905 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:57.905 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:57.905 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:57.905 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:57.905 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:57.905 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:57.905 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:57.905 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:57.905 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:57.905 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:57.905 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:57.905 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:58.164 00:29:58.164 real 0m58.330s 00:29:58.164 user 0m56.396s 00:29:58.164 sys 0m7.446s 00:29:58.164 14:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.164 14:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.164 ************************************ 00:29:58.164 END TEST nvmf_auth_host 00:29:58.164 ************************************ 00:29:58.164 14:24:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:58.164 14:24:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:58.164 14:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:58.164 14:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.164 14:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.164 ************************************ 00:29:58.165 START TEST nvmf_digest 00:29:58.165 ************************************ 00:29:58.165 14:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:58.165 * Looking for test storage... 
00:29:58.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:58.165 
14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:58.165 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.423 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.423 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.423 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:58.423 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:58.423 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:58.423 14:24:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:00.955 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:00.955 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:00.955 
14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:00.955 Found net devices under 0000:84:00.0: cvl_0_0 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:00.955 Found net devices under 0000:84:00.1: cvl_0_1 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.955 14:24:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:00.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:30:00.955 00:30:00.955 --- 10.0.0.2 ping statistics --- 00:30:00.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.955 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:30:00.955 00:30:00.955 --- 10.0.0.1 ping statistics --- 00:30:00.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.955 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.955 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:00.956 ************************************ 00:30:00.956 START TEST nvmf_digest_clean 00:30:00.956 ************************************ 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2633628 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2633628 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2633628 ']' 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.956 14:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:00.956 [2024-07-26 14:24:17.758466] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:30:00.956 [2024-07-26 14:24:17.758563] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.956 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.956 [2024-07-26 14:24:17.835881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.214 [2024-07-26 14:24:17.954833] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.214 [2024-07-26 14:24:17.954900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.214 [2024-07-26 14:24:17.954917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.214 [2024-07-26 14:24:17.954931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.214 [2024-07-26 14:24:17.954943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
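The target bring-up recorded above follows the usual SPDK pattern: nvmf_tgt is launched paused with --wait-for-rpc inside the test network namespace, and the harness polls the RPC socket until the application answers before configuring it. A minimal bash sketch of that pattern, using this workspace's paths; the polling loop is a simplification of the waitforlisten helper, and rpc_get_methods is used here only as a cheap liveness probe, not necessarily the exact call the helper makes:

    # Start the NVMe-oF target inside the test netns, paused until RPC-driven init
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the default RPC socket until the target accepts commands
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done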
00:30:01.214 [2024-07-26 14:24:17.954982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.214 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:01.472 null0 00:30:01.472 [2024-07-26 14:24:18.146438] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.473 [2024-07-26 14:24:18.170691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2633723 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2633723 /var/tmp/bperf.sock 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2633723 ']' 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:01.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:01.473 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:01.473 [2024-07-26 14:24:18.221085] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:30:01.473 [2024-07-26 14:24:18.221160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633723 ] 00:30:01.473 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.473 [2024-07-26 14:24:18.287543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.731 [2024-07-26 14:24:18.408598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.731 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.731 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:01.731 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:01.731 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:01.731 14:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:02.298 14:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:02.298 14:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:02.863 nvme0n1 00:30:02.863 14:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:02.863 14:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:02.863 Running I/O for 2 seconds... 
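Condensed, the benchmark bring-up traced above is: start bdevperf paused on its own RPC socket, finish framework init, attach the remote namespace with the TCP data digest enabled, then trigger the run. A sketch assembled from the exact flags in this job ($SPDK stands in for the long workspace prefix, purely for readability):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bdevperf pinned to core 1 (-m 2): 4 KiB random reads, queue depth 128, 2 s run
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC32C) that this test exercises
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests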
00:30:05.392 00:30:05.392 Latency(us) 00:30:05.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.392 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:05.392 nvme0n1 : 2.00 18413.62 71.93 0.00 0.00 6942.21 3155.44 17573.36 00:30:05.392 =================================================================================================================== 00:30:05.392 Total : 18413.62 71.93 0.00 0.00 6942.21 3155.44 17573.36 00:30:05.392 0 00:30:05.392 14:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:05.392 14:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:05.392 14:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:05.392 14:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:05.392 | select(.opcode=="crc32c") 00:30:05.392 | "\(.module_name) \(.executed)"' 00:30:05.392 14:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2633723 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2633723 ']' 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2633723 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2633723 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2633723' 00:30:05.392 killing process with pid 2633723 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2633723 00:30:05.392 Received shutdown signal, test time was about 2.000000 seconds 00:30:05.392 00:30:05.392 Latency(us) 00:30:05.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.392 =================================================================================================================== 00:30:05.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.392 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2633723 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2634183 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2634183 /var/tmp/bperf.sock 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2634183 ']' 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:05.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.651 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:05.651 [2024-07-26 14:24:22.405706] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:30:05.651 [2024-07-26 14:24:22.405812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634183 ] 00:30:05.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:05.651 Zero copy mechanism will not be used. 
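The "zero copy threshold" notices are expected for this pass: 131072-byte I/Os exceed the 65536-byte zero-copy threshold, so the transport falls back to copying sends, exactly as the message says. The four clean-digest passes differ only in workload shape, per the run_bperf calls traced in this section (args: rw, io size, queue depth, scan_dsa):

    run_bperf randread  4096   128 false
    run_bperf randread  131072 16  false    # this pass; 131072 > 65536, no zero copy
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false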
00:30:05.651 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.651 [2024-07-26 14:24:22.482061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.909 [2024-07-26 14:24:22.604087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.167 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.167 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:06.167 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:06.167 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:06.167 14:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:06.425 14:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:06.425 14:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:07.358 nvme0n1 00:30:07.358 14:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:07.358 14:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:07.358 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:07.358 Zero copy mechanism will not be used. 00:30:07.358 Running I/O for 2 seconds... 
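When a run finishes, the harness verifies the digests were actually computed, and by the module it expects. That is the accel_get_stats + jq pipeline traced after each pass; with scan_dsa=false the expected module is "software":

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # digest.sh then asserts: executed > 0  and  module_name == software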
00:30:09.258 00:30:09.258 Latency(us) 00:30:09.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.258 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:09.258 nvme0n1 : 2.00 3167.04 395.88 0.00 0.00 5047.83 1262.17 10728.49 00:30:09.258 =================================================================================================================== 00:30:09.258 Total : 3167.04 395.88 0.00 0.00 5047.83 1262.17 10728.49 00:30:09.258 0 00:30:09.258 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:09.258 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:09.258 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:09.258 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:09.258 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:09.258 | select(.opcode=="crc32c") 00:30:09.258 | "\(.module_name) \(.executed)"' 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2634183 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2634183 ']' 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2634183 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2634183 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2634183' 00:30:09.823 killing process with pid 2634183 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2634183 00:30:09.823 Received shutdown signal, test time was about 2.000000 seconds 00:30:09.823 00:30:09.823 Latency(us) 00:30:09.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.823 =================================================================================================================== 00:30:09.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:09.823 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2634183 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2634715 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2634715 /var/tmp/bperf.sock 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2634715 ']' 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:10.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:10.080 14:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:10.080 [2024-07-26 14:24:26.861008] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:30:10.080 [2024-07-26 14:24:26.861108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634715 ] 00:30:10.080 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.080 [2024-07-26 14:24:26.935748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.338 [2024-07-26 14:24:27.058519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.596 14:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.596 14:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:10.596 14:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:10.596 14:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:10.596 14:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:10.854 14:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.854 14:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:11.443 nvme0n1 00:30:11.443 14:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:11.443 14:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:11.443 Running I/O for 2 seconds... 
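Between passes, killprocess (common/autotest_common.sh) tears the previous bdevperf down; the guards visible in the trace, roughly:

    kill -0 "$pid"                                    # fail fast if it already exited
    ps --no-headers -o comm= "$pid"                   # comm is reactor_1 here, i.e. not a sudo wrapper
    kill "$pid" && wait "$pid"                        # bdevperf prints its shutdown stats on the way out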
00:30:13.982 00:30:13.982 Latency(us) 00:30:13.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.982 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.982 nvme0n1 : 2.00 19895.56 77.72 0.00 0.00 6421.90 2645.71 11068.30 00:30:13.982 =================================================================================================================== 00:30:13.982 Total : 19895.56 77.72 0.00 0.00 6421.90 2645.71 11068.30 00:30:13.982 0 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:13.982 | select(.opcode=="crc32c") 00:30:13.982 | "\(.module_name) \(.executed)"' 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2634715 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2634715 ']' 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2634715 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:13.982 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2634715 00:30:14.253 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:14.253 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:14.253 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2634715' 00:30:14.253 killing process with pid 2634715 00:30:14.253 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2634715 00:30:14.253 Received shutdown signal, test time was about 2.000000 seconds 00:30:14.253 00:30:14.253 Latency(us) 00:30:14.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.253 =================================================================================================================== 00:30:14.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.253 14:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2634715 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2635253 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2635253 /var/tmp/bperf.sock 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2635253 ']' 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:14.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.512 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:14.512 [2024-07-26 14:24:31.214238] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:30:14.512 [2024-07-26 14:24:31.214329] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635253 ] 00:30:14.512 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:14.512 Zero copy mechanism will not be used. 
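The -z --wait-for-rpc pair is what makes the accel checks possible: bdevperf idles before subsystem init so accel modules could be configured first. With scan_dsa=false the guard at digest.sh@86 (the bare "false" above) is a no-op — when true, this is presumably where a DSA accel-module scan would be issued (not exercised anywhere in this run) — and the harness goes straight to:

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init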
00:30:14.512 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.512 [2024-07-26 14:24:31.282164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.769 [2024-07-26 14:24:31.400101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.769 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:14.769 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:14.769 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:14.769 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:14.769 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:15.028 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.028 14:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.593 nvme0n1 00:30:15.593 14:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:15.593 14:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:15.851 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:15.851 Zero copy mechanism will not be used. 00:30:15.851 Running I/O for 2 seconds... 
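Each launch follows the same fork-then-poll pattern; waitforlisten retries the UNIX socket (max_retries=100 in the trace) until the new process answers RPCs:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock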
00:30:17.750 00:30:17.750 Latency(us) 00:30:17.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.750 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:17.750 nvme0n1 : 2.01 2944.85 368.11 0.00 0.00 5419.87 2815.62 7864.32 00:30:17.750 =================================================================================================================== 00:30:17.750 Total : 2944.85 368.11 0.00 0.00 5419.87 2815.62 7864.32 00:30:17.750 0 00:30:17.750 14:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:17.750 14:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:17.750 14:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:17.750 14:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:17.750 14:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:17.750 | select(.opcode=="crc32c") 00:30:17.750 | "\(.module_name) \(.executed)"' 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2635253 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2635253 ']' 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2635253 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2635253 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2635253' 00:30:18.683 killing process with pid 2635253 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2635253 00:30:18.683 Received shutdown signal, test time was about 2.000000 seconds 00:30:18.683 00:30:18.683 Latency(us) 00:30:18.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.683 =================================================================================================================== 00:30:18.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.683 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2635253 00:30:18.941 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2633628 00:30:18.941 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2633628 ']' 00:30:18.941 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2633628 00:30:18.941 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:18.941 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:18.941 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2633628 00:30:18.941 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:18.942 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:18.942 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2633628' 00:30:18.942 killing process with pid 2633628 00:30:18.942 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2633628 00:30:18.942 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2633628 00:30:19.200 00:30:19.200 real 0m18.223s 00:30:19.200 user 0m37.986s 00:30:19.200 sys 0m4.891s 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.200 ************************************ 00:30:19.200 END TEST nvmf_digest_clean 00:30:19.200 ************************************ 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:19.200 ************************************ 00:30:19.200 START TEST nvmf_digest_error 00:30:19.200 ************************************ 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2635815 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:19.200 14:24:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2635815 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2635815 ']' 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:19.200 14:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:19.200 [2024-07-26 14:24:36.047346] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:30:19.200 [2024-07-26 14:24:36.047445] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.200 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.459 [2024-07-26 14:24:36.122534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.459 [2024-07-26 14:24:36.242874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.459 [2024-07-26 14:24:36.242933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.459 [2024-07-26 14:24:36.242949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.459 [2024-07-26 14:24:36.242962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.459 [2024-07-26 14:24:36.242974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
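At this point the suite switches from nvmf_digest_clean to nvmf_digest_error: the target is restarted with --wait-for-rpc (nvmfpid 2635815) so crc32c can be re-routed to the error-injecting accel module before init, and the bperf side is configured so injected digest failures are retried rather than fatal:

    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1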
00:30:19.459 [2024-07-26 14:24:36.243006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:19.459 [2024-07-26 14:24:36.311579] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.459 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:19.717 null0 00:30:19.717 [2024-07-26 14:24:36.428740] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.717 [2024-07-26 14:24:36.452994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2635840 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2635840 /var/tmp/bperf.sock 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2635840 ']' 
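The injection plumbing is three target-side RPCs, issued through the harness's rpc_cmd (which talks to the target app's default socket inside the cvl_0_0_ns_spdk netns); a condensed replay of the calls traced here and just below:

    rpc_cmd accel_assign_opc -o crc32c -m error                   # route all crc32c ops to the "error" module
    rpc_cmd accel_error_inject_error -o crc32c -t disable         # pass-through while bperf attaches
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # then corrupt the next 256 digests
    # effect: nvme_tcp logs "data digest error" and completes commands with
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22), as in the read pass that follows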
00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:19.717 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:19.717 [2024-07-26 14:24:36.503251] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:30:19.718 [2024-07-26 14:24:36.503325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635840 ] 00:30:19.718 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.718 [2024-07-26 14:24:36.569639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.977 [2024-07-26 14:24:36.695348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.977 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:19.977 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:19.977 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:19.977 14:24:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:20.543 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:20.543 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.543 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.543 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.543 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.543 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.108 nvme0n1 00:30:21.109 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:21.109 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.109 14:24:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:21.109 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.109 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:21.109 14:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:21.367 Running I/O for 2 seconds... 00:30:21.367 [2024-07-26 14:24:38.063266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.063322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.063345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.078205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.078241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.078263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.090729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.090764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.090783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.104618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.104654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.104674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.119784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.119819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.119839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.132287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.132322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.132341] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.146503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.146537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.146556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.160874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.160909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.160928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.172889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.172930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.172950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.187214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.187249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.187268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.200555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.200588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.200607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.216680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.216714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.367 [2024-07-26 14:24:38.216733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.367 [2024-07-26 14:24:38.230839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.367 [2024-07-26 14:24:38.230873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:21.367 [2024-07-26 14:24:38.230893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.368 [2024-07-26 14:24:38.244709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.368 [2024-07-26 14:24:38.244743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.368 [2024-07-26 14:24:38.244762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.626 [2024-07-26 14:24:38.256244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.626 [2024-07-26 14:24:38.256277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.626 [2024-07-26 14:24:38.256296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.626 [2024-07-26 14:24:38.273987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.626 [2024-07-26 14:24:38.274021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.626 [2024-07-26 14:24:38.274040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.626 [2024-07-26 14:24:38.287939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.626 [2024-07-26 14:24:38.287973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.626 [2024-07-26 14:24:38.287992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.626 [2024-07-26 14:24:38.299228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.626 [2024-07-26 14:24:38.299263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.626 [2024-07-26 14:24:38.299282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.626 [2024-07-26 14:24:38.314199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.626 [2024-07-26 14:24:38.314233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.626 [2024-07-26 14:24:38.314251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.626 [2024-07-26 14:24:38.326305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0) 00:30:21.626 [2024-07-26 14:24:38.326339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:1716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.626 [2024-07-26 14:24:38.326359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.626 [2024-07-26 14:24:38.340946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0)
00:30:21.626 [2024-07-26 14:24:38.340979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.626 [2024-07-26 14:24:38.340998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.626 [2024-07-26 14:24:38.358189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0)
00:30:21.626 [2024-07-26 14:24:38.358223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.626 [2024-07-26 14:24:38.358242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair=(0x1c952f0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining reads between 14:24:38.370 and 14:24:40.034; only the timestamps, cid, and lba values differ ...]
00:30:23.183 [2024-07-26 14:24:40.049352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c952f0)
00:30:23.183 [2024-07-26 14:24:40.049388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.183 [2024-07-26 14:24:40.049407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
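Every failure in the run above follows the three-record pattern just shown: nvme_tcp's receive path recomputes the CRC32C data digest, the injected corruption makes the comparison fail, and the in-flight READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of data. A quick tally of those completions from a saved copy of this console output (the filename here is illustrative only) should match the count get_transient_errcount reads back further down, 144 in this run:

  # count transient transport error completions captured in the trace
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log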
00:30:23.183 
00:30:23.183 Latency(us)
00:30:23.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:23.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:23.183 nvme0n1 : 2.00 18388.71 71.83 0.00 0.00 6951.76 3665.16 19806.44
00:30:23.183 ===================================================================================================================
00:30:23.183 Total : 18388.71 71.83 0.00 0.00 6951.76 3665.16 19806.44
00:30:23.183 0
00:30:23.441 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:23.441 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:23.441 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:23.441 | .driver_specific
00:30:23.441 | .nvme_error
00:30:23.441 | .status_code
00:30:23.441 | .command_transient_transport_error'
00:30:23.441 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:23.699 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:30:23.699 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2635840
00:30:23.699 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2635840 ']'
00:30:23.699 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2635840
00:30:23.699 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:23.699 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:23.699 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2635840
00:30:23.957 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:23.957 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:23.957 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2635840'
00:30:23.957 killing process with pid 2635840
00:30:23.957 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2635840
Received shutdown signal, test time was about 2.000000 seconds
00:30:23.957 
00:30:23.957 Latency(us)
00:30:23.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:23.957 ===================================================================================================================
00:30:23.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:23.957 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2635840
00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
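The get_transient_errcount helper traced above reduces to one RPC plus a jq filter over its JSON reply. A minimal standalone sketch, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock and was started with --nvme-error-stat so the per-status-code counters exist:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Pull per-bdev I/O stats and extract the NVMe transient transport error counter.
  get_transient_errcount() {
      local bdev=$1
      "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  # host/digest.sh@71 asserts the counter is non-zero; this run observed 144.
  (( $(get_transient_errcount nvme0n1) > 0 ))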
16 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2636372 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2636372 /var/tmp/bperf.sock 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2636372 ']' 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:24.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:24.215 14:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.215 [2024-07-26 14:24:40.940071] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:30:24.215 [2024-07-26 14:24:40.940172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636372 ] 00:30:24.215 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:24.215 Zero copy mechanism will not be used. 
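The subtest that just finished closes by pulling its transient-error count straight out of bdev_get_iostat with jq before killing the bdevperf instance. A minimal sketch of that read-back, assuming jq is installed and a bdevperf instance is still serving /var/tmp/bperf.sock; the rpc.py invocation and the jq filter are copied from the trace above, while the function name and the final assertion are this sketch's own framing, not the suite's:

    #!/usr/bin/env bash
    # Paths are the ones printed in the trace; anything else is an assumption.
    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Read the per-status NVMe error counter kept because bdev_nvme_set_options
    # was called with --nvme-error-stat earlier in the run.
    get_transient_errcount() {
        local bdev=$1
        "$ROOT/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    count=$(get_transient_errcount nvme0n1)   # the run above read back 144
    (( count > 0 )) && echo "counted $count transient transport errors"

The test passes exactly when that counter is positive, i.e. when every injected digest corruption surfaced as a COMMAND TRANSIENT TRANSPORT ERROR completion.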
00:30:24.215 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.215 [2024-07-26 14:24:41.016258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.473 [2024-07-26 14:24:41.138714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.473 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:24.473 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:24.473 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.473 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:25.039 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:25.039 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.039 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:25.039 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.039 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.039 14:24:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:25.605 nvme0n1 00:30:25.605 14:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:25.605 14:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.605 14:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:25.605 14:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.605 14:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:25.605 14:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:25.891 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:25.891 Zero copy mechanism will not be used. 00:30:25.891 Running I/O for 2 seconds... 
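Before the stream of error records begins, the setup the trace just performed is easier to follow in one place. A hedged consolidation of those RPC steps, copied flag-for-flag from the log: the socket-wait loop stands in for the suite's waitforlisten helper, and routing the accel_error_inject_error calls through the bperf socket is an assumption (the trace issues them via its rpc_cmd helper, whose target socket is not shown here):

    #!/usr/bin/env bash
    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    RPC="$ROOT/scripts/rpc.py -s $SOCK"

    # 1. Start bdevperf paused (-z): randread, 128 KiB I/O, queue depth 16,
    #    2 s runtime, core mask 0x2 -- the exact command line in the trace.
    "$ROOT/build/examples/bdevperf" -m 2 -r "$SOCK" \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    until [ -S "$SOCK" ]; do sleep 0.1; done  # crude waitforlisten stand-in

    # 2. Keep per-status NVMe error counters and retry failed I/O forever,
    #    so digest errors are counted rather than failing the workload.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Make sure crc32c injection starts disabled, then attach the target
    #    with data digest enabled (--ddgst) so every READ payload is checksummed.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Corrupt the next 32 crc32c operations and drive I/O; each corrupted
    #    digest shows up below as "data digest error on tqpair" followed by a
    #    COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    "$ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

With qd=16 and -i 32, a bounded burst of reads fails digest verification; the repeated READ/completion pairs that follow are those injected failures being logged, one pair per corrupted crc32c operation.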
00:30:25.891 [2024-07-26 14:24:42.536540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.536601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.536625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.547545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.547585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.547606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.558416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.558461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.558491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.569387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.569421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.569448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.580372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.580405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.580424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.591731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.591775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.591795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.602655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.602689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.602709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.613339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.613375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.613395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.624674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.624709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.624729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.635446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.635481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.635500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.646686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.646721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.646750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.657398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.657443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.891 [2024-07-26 14:24:42.657466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.891 [2024-07-26 14:24:42.668195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.891 [2024-07-26 14:24:42.668230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.668251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.679968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.680004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.680025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.691984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.692024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.692045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.703403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.703458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.703479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.714574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.714611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.714631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.724448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.724482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.724501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.733978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.734012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.734031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.743455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.743488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.892 [2024-07-26 14:24:42.743508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.892 [2024-07-26 14:24:42.752596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:25.892 [2024-07-26 14:24:42.752630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:25.892 [2024-07-26 14:24:42.752649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.761663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.761696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.761715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.771790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.771833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.771852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.780391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.780424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.780450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.789678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.789718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.789738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.798931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.798965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.798984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.808451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.808483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.808503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.818898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.818932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.818959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.828261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.828294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.828313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.837980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.838013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.838032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.847138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.847172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.847191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.856164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.856197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.152 [2024-07-26 14:24:42.856216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.152 [2024-07-26 14:24:42.865240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.152 [2024-07-26 14:24:42.865272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.865291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.874469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.874502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.874521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.883393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.883437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.883458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.892454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.892487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.892506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.901391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.901438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.901460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.911297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.911341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.911361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.921105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.921139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.921158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.930492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.930526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.930544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.939931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.939964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.939983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.949004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 
00:30:26.153 [2024-07-26 14:24:42.949036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.949055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.958141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.958173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.958193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.967176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.967208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.967227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.976282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.976315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.976333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.985317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.985350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.985369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:42.994145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:42.994178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:42.994196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:43.003504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:43.003537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:43.003556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:43.012916] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:43.012949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:43.012968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:43.022293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:43.022331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.153 [2024-07-26 14:24:43.022350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.153 [2024-07-26 14:24:43.031939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.153 [2024-07-26 14:24:43.031971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.154 [2024-07-26 14:24:43.031990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.041156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.041189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.041207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.050005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.050038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.050057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.059068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.059105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.059125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.068096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.068128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.068147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.077092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.077124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.077143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.086102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.086139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.086159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.095388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.095439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.095459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.104686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.104719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.104738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.113924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.113957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.113976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.123275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.123308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.123326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.132450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.132490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.132509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.141832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.141874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.141893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.151042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.151074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.151092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.160047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.160078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.413 [2024-07-26 14:24:43.160097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.413 [2024-07-26 14:24:43.169127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.413 [2024-07-26 14:24:43.169159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.169178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.178062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.178094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.178113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.186970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.187004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.187022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.196234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.196265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.196284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.206376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.206410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.206439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.217387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.217422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.217455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.228485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.228519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.228538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.239213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.239248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.239268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.250677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.250712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.250731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.261235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.261270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.261290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.272108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.272143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:26.414 [2024-07-26 14:24:43.272163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.281846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.281881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.281901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.414 [2024-07-26 14:24:43.292623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.414 [2024-07-26 14:24:43.292658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.414 [2024-07-26 14:24:43.292678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.303352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.303389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.303409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.312979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.313020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.313040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.322777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.322811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.322830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.332676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.332709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.332728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.342553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.342587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.342606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.353255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.353290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.353309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.363385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.363419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.363450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.372982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.373016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.373035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.383391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.383426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.383455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.394602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.394639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.394659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.405455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.405489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.673 [2024-07-26 14:24:43.405509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.673 [2024-07-26 14:24:43.415300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0) 00:30:26.673 [2024-07-26 14:24:43.415335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.673 [2024-07-26 14:24:43.415354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:26.673 [2024-07-26 14:24:43.426686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b88ec0)
00:30:26.674 [2024-07-26 14:24:43.426721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.674 [2024-07-26 14:24:43.426740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:26.674 - 00:30:27.710 [the same digest-error/READ/completion triplet repeats roughly every 10 ms for the remainder of the 2-second randread run; only the timestamp, lba and sqhd fields vary. The error counter read below totals 190 transient transport errors.]
00:30:27.710 Latency(us)
00:30:27.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:27.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:27.710 nvme0n1 : 2.01 2937.29 367.16 0.00 0.00 5441.04 4344.79 14951.92
00:30:27.710 ===================================================================================================================
00:30:27.710 Total : 2937.29 367.16 0.00 0.00 5441.04 4344.79 14951.92
00:30:27.710 0
00:30:27.710 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:27.710 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:27.710 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:27.710 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:30:27.967 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 ))
00:30:27.967 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2636372
00:30:27.967 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2636372 ']'
00:30:27.967 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2636372
00:30:27.967 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:27.967 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:27.967 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2636372
00:30:28.225 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:28.225 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:28.225 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2636372'
00:30:28.225 killing process with pid 2636372
00:30:28.225 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2636372
00:30:28.225 Received shutdown signal, test time was about 2.000000 seconds
00:30:28.225 Latency(us)
00:30:28.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:28.225 ===================================================================================================================
00:30:28.225 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:28.225 14:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2636372
00:30:28.483 14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2636911
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2636911 /var/tmp/bperf.sock
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2636911 ']'
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
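Before the next bdevperf instance comes up, it is worth spelling out the check that just passed. The (( 190 > 0 )) assertion above is the entire pass/fail criterion for this pass: the harness reads the controller's error statistics over JSON-RPC and requires at least one injected digest error to have surfaced as a transient transport error. Below is a minimal sketch of what the traced get_transient_errcount helper does; the rpc.py path, socket and jq filter are copied from the trace, while the standalone script around them is illustrative rather than the verbatim digest.sh source.

    #!/usr/bin/env bash
    # Sketch: count COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions seen
    # by a bdev, as reported by the bdevperf JSON-RPC server on bperf.sock.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_transient_errcount() {
        local bdev=$1
        # The per-status-code counters appear in the iostat output because
        # the controller was set up with bdev_nvme_set_options --nvme-error-stat.
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Fail the run unless at least one transient transport error was counted;
    # in the log above the counter came back as 190.
    (( $(get_transient_errcount nvme0n1) > 0 ))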
00:30:28.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
14:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:28.483 [2024-07-26 14:24:45.181959] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:30:28.483 [2024-07-26 14:24:45.182044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636911 ]
00:30:28.483 EAL: No free 2048 kB hugepages reported on node 1
00:30:28.483 [2024-07-26 14:24:45.248229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:28.483 [2024-07-26 14:24:45.368443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:29.417 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:29.417 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:30:29.417 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:29.417 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:29.674 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:29.674 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:29.674 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:29.674 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:29.674 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:29.674 14:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:30.240 nvme0n1
00:30:30.240 14:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:30.240 14:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:30.240 14:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:30.240 14:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:30.240 14:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:30.240 14:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
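Condensed from the RPC trace above, each error-injection pass arms the accel layer before the workload starts. The sketch below restates that sequence as a plain script: every rpc.py subcommand and flag is copied from the trace, the rpc wrapper function is added here only for brevity, and the -i 256 argument is reproduced as logged rather than independently verified.

    #!/usr/bin/env bash
    # Sketch: setup for one digest-error pass against the bdevperf RPC server.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Keep per-command NVMe error counters and let the bdev layer retry
    # transient failures, so injected digest errors never abort the job itself.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # No injection while the controller is being attached...
    rpc accel_error_inject_error -o crc32c -t disable

    # ...then attach the target over TCP with data digest (--ddgst) enabled.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt subsequent crc32c operations so every data digest check fails
    # (-o, -t and -i arguments exactly as traced).
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the timed randwrite workload.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests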
00:30:30.498 Running I/O for 2 seconds...
00:30:30.498 [2024-07-26 14:24:47.176225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ed920
00:30:30.498 [2024-07-26 14:24:47.177525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:30.498 [2024-07-26 14:24:47.177566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:30:30.498 - 00:30:30.758 [the same Data digest error/WRITE/completion triplet repeats as the randwrite run proceeds; only the timestamp, pdu, cid, lba and sqhd fields vary]
00:30:30.758 [2024-07-26 14:24:47.526456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ecc78
00:30:30.758 [2024-07-26 14:24:47.527441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:30.758 [2024-07-26 14:24:47.527472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.540082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fa3a0 00:30:30.758 [2024-07-26 14:24:47.541245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.541275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.553716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f2510 00:30:30.758 [2024-07-26 14:24:47.555051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.555081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.568183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f8a50 00:30:30.758 [2024-07-26 14:24:47.569705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.569736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.581681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5220 00:30:30.758 [2024-07-26 14:24:47.583449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.583480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.594808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e84c0 00:30:30.758 [2024-07-26 14:24:47.596577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.596608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.607876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6b70 00:30:30.758 [2024-07-26 14:24:47.609655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.609696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.620924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5220 00:30:30.758 [2024-07-26 14:24:47.622707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2924 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.622737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:30.758 [2024-07-26 14:24:47.633985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e84c0 00:30:30.758 [2024-07-26 14:24:47.635759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.758 [2024-07-26 14:24:47.635791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.647072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6b70 00:30:31.016 [2024-07-26 14:24:47.648843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.648874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.660132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5220 00:30:31.016 [2024-07-26 14:24:47.661915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.661946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.673209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e84c0 00:30:31.016 [2024-07-26 14:24:47.674980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.675012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.686252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6b70 00:30:31.016 [2024-07-26 14:24:47.688031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.688062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.699353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5220 00:30:31.016 [2024-07-26 14:24:47.701144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.701175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.712424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e84c0 00:30:31.016 [2024-07-26 14:24:47.714200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:19622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.714231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.725527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6b70 00:30:31.016 [2024-07-26 14:24:47.727292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.727323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.738597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5220 00:30:31.016 [2024-07-26 14:24:47.740370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.740401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.749753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f0350 00:30:31.016 [2024-07-26 14:24:47.750607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.750643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.762673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e0ea0 00:30:31.016 [2024-07-26 14:24:47.763525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.763556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.775705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6fa8 00:30:31.016 [2024-07-26 14:24:47.776543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.776574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.788765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f0350 00:30:31.016 [2024-07-26 14:24:47.789660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.789691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.801791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e0ea0 00:30:31.016 [2024-07-26 14:24:47.802650] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.802681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.016 [2024-07-26 14:24:47.814887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6fa8 00:30:31.016 [2024-07-26 14:24:47.815759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.016 [2024-07-26 14:24:47.815790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.017 [2024-07-26 14:24:47.827927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f0350 00:30:31.017 [2024-07-26 14:24:47.828765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.017 [2024-07-26 14:24:47.828795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.017 [2024-07-26 14:24:47.840977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e0ea0 00:30:31.017 [2024-07-26 14:24:47.841808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.017 [2024-07-26 14:24:47.841838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.017 [2024-07-26 14:24:47.854016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6fa8 00:30:31.017 [2024-07-26 14:24:47.854871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.017 [2024-07-26 14:24:47.854901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.017 [2024-07-26 14:24:47.867103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f0350 00:30:31.017 [2024-07-26 14:24:47.867972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.017 [2024-07-26 14:24:47.868003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.017 [2024-07-26 14:24:47.880179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e0ea0 00:30:31.017 [2024-07-26 14:24:47.881097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.017 [2024-07-26 14:24:47.881128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.017 [2024-07-26 14:24:47.893257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6fa8 00:30:31.017 [2024-07-26 14:24:47.894209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.017 [2024-07-26 14:24:47.894240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.275 [2024-07-26 14:24:47.906288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f0350 00:30:31.275 [2024-07-26 14:24:47.907234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.275 [2024-07-26 14:24:47.907265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.275 [2024-07-26 14:24:47.919321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e0ea0 00:30:31.276 [2024-07-26 14:24:47.920155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:47.920185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:47.932344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6fa8 00:30:31.276 [2024-07-26 14:24:47.933266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:47.933296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:47.945425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f0350 00:30:31.276 [2024-07-26 14:24:47.946280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:47.946310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:47.958710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e9e10 00:30:31.276 [2024-07-26 14:24:47.959957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:47.959987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:47.971777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e9168 00:30:31.276 [2024-07-26 14:24:47.973016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:47.973047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:47.984815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e9e10 00:30:31.276 [2024-07-26 
14:24:47.986066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:47.986104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:47.997858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e9168 00:30:31.276 [2024-07-26 14:24:47.999104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:47.999135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.010948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e9e10 00:30:31.276 [2024-07-26 14:24:48.012182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.012213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.024213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e4140 00:30:31.276 [2024-07-26 14:24:48.025476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.025506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.037355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f20d8 00:30:31.276 [2024-07-26 14:24:48.038610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.038641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.050350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f81e0 00:30:31.276 [2024-07-26 14:24:48.051632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.051663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.063357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e8d30 00:30:31.276 [2024-07-26 14:24:48.064646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.064676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.076343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with 
pdu=0x2000190e2c28 00:30:31.276 [2024-07-26 14:24:48.077626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.077655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.089542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190feb58 00:30:31.276 [2024-07-26 14:24:48.090825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.090860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.102578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e12d8 00:30:31.276 [2024-07-26 14:24:48.103840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.103870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.115602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190dece0 00:30:31.276 [2024-07-26 14:24:48.116886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.116917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.128603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e9168 00:30:31.276 [2024-07-26 14:24:48.129879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.129909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.141562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190df550 00:30:31.276 [2024-07-26 14:24:48.142806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.142836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.276 [2024-07-26 14:24:48.154896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f6cc8 00:30:31.276 [2024-07-26 14:24:48.155918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.276 [2024-07-26 14:24:48.155949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.534 [2024-07-26 14:24:48.167872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b51b90) with pdu=0x2000190f35f0 00:30:31.534 [2024-07-26 14:24:48.168795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.534 [2024-07-26 14:24:48.168825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.534 [2024-07-26 14:24:48.181738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ed0b0 00:30:31.534 [2024-07-26 14:24:48.182918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.534 [2024-07-26 14:24:48.182949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:31.534 [2024-07-26 14:24:48.193997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f2948 00:30:31.534 [2024-07-26 14:24:48.196056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.196087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.205460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e1710 00:30:31.535 [2024-07-26 14:24:48.206439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.206480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.220029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ec840 00:30:31.535 [2024-07-26 14:24:48.221201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.221232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.233492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fe720 00:30:31.535 [2024-07-26 14:24:48.234842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.234873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.245873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fb048 00:30:31.535 [2024-07-26 14:24:48.247277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.247308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.260409] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ee5c8 00:30:31.535 [2024-07-26 14:24:48.262022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.262054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.273422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e84c0 00:30:31.535 [2024-07-26 14:24:48.274950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.274981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.286418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fac10 00:30:31.535 [2024-07-26 14:24:48.287994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.288025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.299473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6b70 00:30:31.535 [2024-07-26 14:24:48.300976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.301007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.312930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e0ea0 00:30:31.535 [2024-07-26 14:24:48.314587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.314618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.324009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f5378 00:30:31.535 [2024-07-26 14:24:48.324860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.324893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.337647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fc128 00:30:31.535 [2024-07-26 14:24:48.338572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.338604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.350897] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190edd58 00:30:31.535 [2024-07-26 14:24:48.352350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.352381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.364009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190df550 00:30:31.535 [2024-07-26 14:24:48.365418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.365456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.377010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e9168 00:30:31.535 [2024-07-26 14:24:48.378425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.378462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.390014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190dece0 00:30:31.535 [2024-07-26 14:24:48.391366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.391397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.403057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e4140 00:30:31.535 [2024-07-26 14:24:48.404471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.404503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.535 [2024-07-26 14:24:48.416047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ee5c8 00:30:31.535 [2024-07-26 14:24:48.417453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.535 [2024-07-26 14:24:48.417483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.793 [2024-07-26 14:24:48.429025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f7100 00:30:31.793 [2024-07-26 14:24:48.430442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.793 [2024-07-26 14:24:48.430478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.793 
[2024-07-26 14:24:48.442045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fb048 00:30:31.793 [2024-07-26 14:24:48.443446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.793 [2024-07-26 14:24:48.443477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.793 [2024-07-26 14:24:48.455046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ff3c8 00:30:31.793 [2024-07-26 14:24:48.456447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.793 [2024-07-26 14:24:48.456476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.793 [2024-07-26 14:24:48.468060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e8088 00:30:31.794 [2024-07-26 14:24:48.469493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.469523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.481095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e0a68 00:30:31.794 [2024-07-26 14:24:48.482527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.482557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.494159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e3498 00:30:31.794 [2024-07-26 14:24:48.495552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.495583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.507210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6b70 00:30:31.794 [2024-07-26 14:24:48.508615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.508645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.520207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fac10 00:30:31.794 [2024-07-26 14:24:48.521530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.521561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 
m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.533174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f2d80 00:30:31.794 [2024-07-26 14:24:48.534508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.534538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.546126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f57b0 00:30:31.794 [2024-07-26 14:24:48.547446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.547476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.559539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f1430 00:30:31.794 [2024-07-26 14:24:48.560674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.560704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.571775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190dece0 00:30:31.794 [2024-07-26 14:24:48.573724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.573754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.583040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e73e0 00:30:31.794 [2024-07-26 14:24:48.584053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.584083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.596708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e38d0 00:30:31.794 [2024-07-26 14:24:48.597936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.597967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.611214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ebb98 00:30:31.794 [2024-07-26 14:24:48.612558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.612589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.624681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fac10 00:30:31.794 [2024-07-26 14:24:48.626275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.626305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.637982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f2d80 00:30:31.794 [2024-07-26 14:24:48.639592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.639623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.651064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f4b08 00:30:31.794 [2024-07-26 14:24:48.652692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.652722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.664171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5220 00:30:31.794 [2024-07-26 14:24:48.665798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.665828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:31.794 [2024-07-26 14:24:48.677235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6300 00:30:31.794 [2024-07-26 14:24:48.678843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:31.794 [2024-07-26 14:24:48.678874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.053 [2024-07-26 14:24:48.690294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f5378 00:30:32.053 [2024-07-26 14:24:48.691915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.053 [2024-07-26 14:24:48.691945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.053 [2024-07-26 14:24:48.703362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e49b0 00:30:32.053 [2024-07-26 14:24:48.705000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.053 [2024-07-26 14:24:48.705031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.053 [2024-07-26 14:24:48.716444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e99d8 00:30:32.053 [2024-07-26 14:24:48.718055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.053 [2024-07-26 14:24:48.718085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.053 [2024-07-26 14:24:48.729488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190dfdc0 00:30:32.053 [2024-07-26 14:24:48.731093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.053 [2024-07-26 14:24:48.731124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.053 [2024-07-26 14:24:48.742616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ec840 00:30:32.053 [2024-07-26 14:24:48.744241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.053 [2024-07-26 14:24:48.744272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.053 [2024-07-26 14:24:48.755664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e7c50 00:30:32.053 [2024-07-26 14:24:48.757286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.757317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.768727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e3d08 00:30:32.054 [2024-07-26 14:24:48.770342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.770380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.781767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fe720 00:30:32.054 [2024-07-26 14:24:48.783397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.783434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.794801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e1b48 00:30:32.054 [2024-07-26 14:24:48.796406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.796441] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.807876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f81e0 00:30:32.054 [2024-07-26 14:24:48.809491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.809522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.820899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190eee38 00:30:32.054 [2024-07-26 14:24:48.822499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.822531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.833930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fb8b8 00:30:32.054 [2024-07-26 14:24:48.835521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.835552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.846947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f0350 00:30:32.054 [2024-07-26 14:24:48.848568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.848599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.859984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e4140 00:30:32.054 [2024-07-26 14:24:48.861607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.861638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.873029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ee5c8 00:30:32.054 [2024-07-26 14:24:48.874642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.874672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.886039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190ee190 00:30:32.054 [2024-07-26 14:24:48.887664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.887694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.899101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5658 00:30:32.054 [2024-07-26 14:24:48.900715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.900746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.912123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190de470 00:30:32.054 [2024-07-26 14:24:48.913744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.913775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.925123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fda78 00:30:32.054 [2024-07-26 14:24:48.926741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.926770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.054 [2024-07-26 14:24:48.938165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e95a0 00:30:32.054 [2024-07-26 14:24:48.939656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.054 [2024-07-26 14:24:48.939687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:48.951227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190df988 00:30:32.313 [2024-07-26 14:24:48.952854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:48.952885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:48.964270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f96f8 00:30:32.313 [2024-07-26 14:24:48.965893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:48.965923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:48.977286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e27f0 00:30:32.313 [2024-07-26 14:24:48.978911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 
14:24:48.978941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:48.990313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e23b8 00:30:32.313 [2024-07-26 14:24:48.991967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:48.991997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.003464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fc998 00:30:32.313 [2024-07-26 14:24:49.005082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.005113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.016524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fe2e8 00:30:32.313 [2024-07-26 14:24:49.018135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.018165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.029581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e3498 00:30:32.313 [2024-07-26 14:24:49.031190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.031220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.042656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6b70 00:30:32.313 [2024-07-26 14:24:49.044272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.044301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.055658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190fac10 00:30:32.313 [2024-07-26 14:24:49.057279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.057309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.068702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f2d80 00:30:32.313 [2024-07-26 14:24:49.070312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:32.313 [2024-07-26 14:24:49.070342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.081719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f4b08 00:30:32.313 [2024-07-26 14:24:49.083335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.083365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.094754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e5220 00:30:32.313 [2024-07-26 14:24:49.096363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.096394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.108036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e6300 00:30:32.313 [2024-07-26 14:24:49.109633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.109670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.121067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190f5378 00:30:32.313 [2024-07-26 14:24:49.122691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.122721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.134082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e49b0 00:30:32.313 [2024-07-26 14:24:49.135707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.135737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.147090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190e99d8 00:30:32.313 [2024-07-26 14:24:49.148709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.313 [2024-07-26 14:24:49.148739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.313 [2024-07-26 14:24:49.160144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51b90) with pdu=0x2000190dfdc0 00:30:32.313 [2024-07-26 14:24:49.161660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7655 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:30:32.313 [2024-07-26 14:24:49.161692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:30:32.313
00:30:32.313 Latency(us)
00:30:32.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.313 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:32.313 nvme0n1 : 2.01 19502.45 76.18 0.00 0.00 6552.23 2633.58 17767.54
00:30:32.313 ===================================================================================================================
00:30:32.313 Total : 19502.45 76.18 0.00 0.00 6552.23 2633.58 17767.54
00:30:32.313 0
00:30:32.313 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:32.313 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:32.313 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:32.313 | .driver_specific
00:30:32.313 | .nvme_error
00:30:32.313 | .status_code
00:30:32.313 | .command_transient_transport_error'
00:30:32.313 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:32.572 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:30:32.572 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2636911
00:30:32.572 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2636911 ']'
00:30:32.572 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2636911
00:30:32.572 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:32.572 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:32.572 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2636911
00:30:32.830 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:32.830 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:32.830 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2636911'
00:30:32.830 killing process with pid 2636911
00:30:32.830 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2636911
00:30:32.830 Received shutdown signal, test time was about 2.000000 seconds
00:30:32.830
00:30:32.830 Latency(us)
00:30:32.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.830 ===================================================================================================================
00:30:32.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:32.830 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2636911
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
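The (( 153 > 0 )) check above is the pass criterion for this pass of the test: with --nvme-error-stat enabled, the bdev layer keeps a per-status-code counter of NVMe completions, and get_transient_errcount reads the COMMAND TRANSIENT TRANSPORT ERROR (00/22) bucket out of bdev_get_iostat. A minimal standalone sketch of the same query, assuming only an SPDK checkout at $SPDK_DIR and a bdevperf instance already listening on /var/tmp/bperf.sock:

    # Read the transient-transport-error counter for one bdev over the bperf RPC socket.
    get_transient_errcount() {
        local bdev=$1
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 )) || echo "expected digest errors to be counted, saw none" >&2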
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2637443
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2637443 /var/tmp/bperf.sock
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2637443 ']'
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:33.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:33.089 14:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:33.089 [2024-07-26 14:24:49.859602] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:30:33.089 [2024-07-26 14:24:49.859725] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637443 ]
00:30:33.089 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:33.089 Zero copy mechanism will not be used.
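The trace above restarts bdevperf for the next pass (128 KiB random writes at queue depth 16): -z starts the app idle, -r names the RPC socket, and waitforlisten polls that socket (up to max_retries=100) before any RPCs are issued. The same launch-and-wait pattern as a sketch, with $SPDK_DIR standing in for the workspace path seen in the trace:

    # Start bdevperf idle (-z) and wait until its UNIX-domain RPC socket answers.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    for _ in $(seq 1 100); do  # bounded wait, mirroring max_retries=100
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done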
00:30:33.089 EAL: No free 2048 kB hugepages reported on node 1
00:30:33.089 [2024-07-26 14:24:49.960260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:33.348 [2024-07-26 14:24:50.094926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:34.281 14:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:34.281 14:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:30:34.281 14:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:34.281 14:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:34.539 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:34.539 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:34.539 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:34.539 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:34.539 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:34.539 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:34.797 nvme0n1
00:30:34.797 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:34.797 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:34.797 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:34.797 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:34.797 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:34.797 14:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:35.055 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:35.055 Zero copy mechanism will not be used.
00:30:35.055 Running I/O for 2 seconds...
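That trace is the entire error-injection setup for the digest-error stream that follows: per-status-code NVMe error accounting is switched on, any stale crc32c injection is disabled, the controller is attached with the NVMe/TCP data digest (--ddgst) enabled, and only then is the accel layer armed to corrupt its next 32 crc32c results, which is what produces the data_crc32_calc_done errors and (00/22) completions below. Collected into one script, using the same RPCs as the trace ($SPDK_DIR is again an assumed stand-in for the workspace path):

    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Count completions per NVMe status code; retry failed commands indefinitely.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c injection left over from a previous pass.
    rpc accel_error_inject_error -o crc32c -t disable
    # Attach with the TCP data digest enabled; digest mismatches will surface
    # as COMMAND TRANSIENT TRANSPORT ERROR completions.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 32 crc32c computations, then kick off the configured workload.
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests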
00:30:35.055 [2024-07-26 14:24:51.831702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.055 [2024-07-26 14:24:51.832146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.055 [2024-07-26 14:24:51.832189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.055 [2024-07-26 14:24:51.848257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.055 [2024-07-26 14:24:51.848556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.055 [2024-07-26 14:24:51.848590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.055 [2024-07-26 14:24:51.863303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.055 [2024-07-26 14:24:51.863694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.055 [2024-07-26 14:24:51.863729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.055 [2024-07-26 14:24:51.878714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.055 [2024-07-26 14:24:51.879143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.055 [2024-07-26 14:24:51.879176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.055 [2024-07-26 14:24:51.898349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.055 [2024-07-26 14:24:51.898809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.055 [2024-07-26 14:24:51.898842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.055 [2024-07-26 14:24:51.917748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.055 [2024-07-26 14:24:51.918182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.056 [2024-07-26 14:24:51.918215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.056 [2024-07-26 14:24:51.936595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.056 [2024-07-26 14:24:51.937103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.056 [2024-07-26 14:24:51.937136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:51.956445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:51.956870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:51.956903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:51.977011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:51.977522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:51.977555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:51.995583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:51.996099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:51.996132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.010493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.010894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.010927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.020556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.020925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.020958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.030357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.030739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.030773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.039955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.040354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.040389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.049299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.049713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.049751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.062061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.062438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.062477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.073323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.073743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.073776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.082601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.082936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.082968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.091463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.091871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.091902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.100889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.101255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.101287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.111037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.111434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.111466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.121250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.121664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.121705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.133028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.133405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.133445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.146627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.147030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.147062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.164133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.164570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.164603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.177175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.177847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.177880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.315 [2024-07-26 14:24:52.191643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.315 [2024-07-26 14:24:52.192043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.315 [2024-07-26 14:24:52.192075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.574 [2024-07-26 14:24:52.206202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.574 [2024-07-26 14:24:52.206813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.574 
[2024-07-26 14:24:52.206846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.574 [2024-07-26 14:24:52.222068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.574 [2024-07-26 14:24:52.222418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.574 [2024-07-26 14:24:52.222459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.574 [2024-07-26 14:24:52.239167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.574 [2024-07-26 14:24:52.239792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.574 [2024-07-26 14:24:52.239824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.574 [2024-07-26 14:24:52.256056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.574 [2024-07-26 14:24:52.256689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.574 [2024-07-26 14:24:52.256721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.574 [2024-07-26 14:24:52.275348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.574 [2024-07-26 14:24:52.276060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.574 [2024-07-26 14:24:52.276093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.574 [2024-07-26 14:24:52.295030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.574 [2024-07-26 14:24:52.295576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.574 [2024-07-26 14:24:52.295608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.574 [2024-07-26 14:24:52.312297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.312808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.312841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.575 [2024-07-26 14:24:52.331543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.332164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.332196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.575 [2024-07-26 14:24:52.350952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.351683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.351717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.575 [2024-07-26 14:24:52.368451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.369151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.369183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.575 [2024-07-26 14:24:52.387562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.388116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.388148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.575 [2024-07-26 14:24:52.405492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.406201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.406243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.575 [2024-07-26 14:24:52.423780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.424497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.424530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.575 [2024-07-26 14:24:52.443783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.575 [2024-07-26 14:24:52.444550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.575 [2024-07-26 14:24:52.444582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.461633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.462176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.462209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.480447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.481152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.481184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.500952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.501607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.501640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.520698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.521374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.521406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.538377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.539001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.539034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.555668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.556188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.556220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.575033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.575663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.575696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.593921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.594589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.594621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.611821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.612544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.612576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.628212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.628686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.628718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.646973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.647602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.647634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.664053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.664873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.664905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:35.833 [2024-07-26 14:24:52.685816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.833 [2024-07-26 14:24:52.686359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.833 [2024-07-26 14:24:52.686391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:35.834 [2024-07-26 14:24:52.705374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:35.834 [2024-07-26 14:24:52.706014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.834 [2024-07-26 14:24:52.706047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.723394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 
[2024-07-26 14:24:52.724037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.724070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.742641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.743358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.743390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.760969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.761627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.761659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.779371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.779919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.779951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.796451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.797155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.797188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.816006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.816654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.816687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.835288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.836005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.836037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.852890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.853309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.853341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.865579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.866113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.866146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.877864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.878543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.878582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.890102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.890468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.890501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.902860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.903247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.903281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.914452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.092 [2024-07-26 14:24:52.914987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.092 [2024-07-26 14:24:52.915020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.092 [2024-07-26 14:24:52.926977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.093 [2024-07-26 14:24:52.927419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.093 [2024-07-26 14:24:52.927474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.093 [2024-07-26 14:24:52.940369] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.093 [2024-07-26 14:24:52.940815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.093 [2024-07-26 14:24:52.940848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.093 [2024-07-26 14:24:52.954281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.093 [2024-07-26 14:24:52.954762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.093 [2024-07-26 14:24:52.954795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.093 [2024-07-26 14:24:52.968862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.093 [2024-07-26 14:24:52.969259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.093 [2024-07-26 14:24:52.969291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:52.981921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:52.982311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:52.982342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:52.993681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:52.994071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:52.994103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.006786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.007256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.007288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.020414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.020772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:36.352 [2024-07-26 14:24:53.032802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.033293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.033325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.047807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.048364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.048396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.064222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.064684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.064717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.081077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.081665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.081697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.094606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.095214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.095246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.109515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.109917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.109949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.124104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.124593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.124626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.140200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.140757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.140789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.156375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.157021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.157054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.167603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.168040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.168073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.176844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.177220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.177252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.185020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.185314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.185346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.192966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.193257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.193289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.201283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.201584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.201616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.209582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.209877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.209917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.221384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.221868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.221907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.352 [2024-07-26 14:24:53.234677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.352 [2024-07-26 14:24:53.234962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.352 [2024-07-26 14:24:53.234994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.247924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.248342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.248374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.258092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.258545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.258577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.267590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.267928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.267960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.276052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.276339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.276371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.284627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.284912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.284944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.295111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.295719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.295751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.309830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.310414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.310453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.321033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.321330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.321362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.329123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.329519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.329551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.337865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.338232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.338264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.352646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.353240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 
[2024-07-26 14:24:53.353272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.367136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.367851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.367883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.386701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.387407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.387446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.403766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.404572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.404605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.418299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.418715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.418756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.434973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.435480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.435512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.451384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.451950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.611 [2024-07-26 14:24:53.451983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.611 [2024-07-26 14:24:53.470327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.611 [2024-07-26 14:24:53.470918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.612 [2024-07-26 14:24:53.470951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.612 [2024-07-26 14:24:53.488579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.612 [2024-07-26 14:24:53.489179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.612 [2024-07-26 14:24:53.489211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.507154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.507777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.507809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.526900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.527831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.527863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.542815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.543253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.543285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.553323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.553802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.553834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.564451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.564883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.564914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.582826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.583553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.583585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.601533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.602279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.602311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.621440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.622269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.622301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.641073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.641568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.641601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.659004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.659640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.659672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.678835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.679384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.679416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.696225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.696765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.870 [2024-07-26 14:24:53.696797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.870 [2024-07-26 14:24:53.714896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90 00:30:36.870 [2024-07-26 14:24:53.715538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.870 [2024-07-26 14:24:53.715571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:36.870 [2024-07-26 14:24:53.731384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90
00:30:36.870 [2024-07-26 14:24:53.732069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.870 [2024-07-26 14:24:53.732101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:36.870 [2024-07-26 14:24:53.748255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90
00:30:36.870 [2024-07-26 14:24:53.748881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.870 [2024-07-26 14:24:53.748914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:37.129 [2024-07-26 14:24:53.768152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90
00:30:37.129 [2024-07-26 14:24:53.768700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:37.129 [2024-07-26 14:24:53.768732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:37.129 [2024-07-26 14:24:53.786347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90
00:30:37.129 [2024-07-26 14:24:53.787113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:37.129 [2024-07-26 14:24:53.787145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:37.129 [2024-07-26 14:24:53.801966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b51d30) with pdu=0x2000190fef90
00:30:37.129 [2024-07-26 14:24:53.802718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:37.129 [2024-07-26 14:24:53.802750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:37.129
00:30:37.129 Latency(us)
00:30:37.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.129 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:37.129 nvme0n1 : 2.01 2018.59 252.32 0.00 0.00 7902.32 3665.16 21845.33
00:30:37.129 ===================================================================================================================
00:30:37.129 Total : 2018.59 252.32 0.00 0.00 7902.32 3665.16 21845.33
00:30:37.129 0
00:30:37.129 14:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:37.129 14:24:53
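Each group of three messages above is one injected failure: tcp.c reports that the CRC32C data digest on a received PDU did not match, then the host prints the WRITE command and its completion, which carries NVMe status (00/22), i.e. Transient Transport Error. host/digest.sh counts those completions to confirm that digest checking actually caught the corruption. A minimal sketch of that readback step, reusing the socket path and jq filter from the trace just below (the surrounding shell is illustrative, not the script's exact code):

    # query per-bdev NVMe error counters from the still-running bperf instance
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
                 | .command_transient_transport_error'
    # digest.sh@71 asserts the printed count is non-zero; in this run it was
    # 130, matching the (( 130 > 0 )) check visible below.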
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:37.129 14:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:37.129 | .driver_specific
00:30:37.129 | .nvme_error
00:30:37.129 | .status_code
00:30:37.129 | .command_transient_transport_error'
00:30:37.129 14:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 130 > 0 ))
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2637443
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2637443 ']'
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2637443
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2637443
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2637443'
00:30:37.387 killing process with pid 2637443
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2637443
00:30:37.387 Received shutdown signal, test time was about 2.000000 seconds
00:30:37.387
00:30:37.387 Latency(us)
00:30:37.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.387 ===================================================================================================================
00:30:37.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:37.387 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2637443
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2635815
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2635815 ']'
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2635815
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2635815
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo
']' 00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2635815' 00:30:37.645 killing process with pid 2635815 00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2635815 00:30:37.645 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2635815 00:30:37.904 00:30:37.904 real 0m18.797s 00:30:37.904 user 0m39.227s 00:30:37.904 sys 0m4.777s 00:30:37.904 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:37.904 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:37.904 ************************************ 00:30:37.904 END TEST nvmf_digest_error 00:30:37.904 ************************************ 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:38.165 rmmod nvme_tcp 00:30:38.165 rmmod nvme_fabrics 00:30:38.165 rmmod nvme_keyring 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2635815 ']' 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2635815 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2635815 ']' 00:30:38.165 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2635815 00:30:38.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2635815) - No such process 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2635815 is not found' 00:30:38.166 Process with pid 2635815 is not found 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:38.166 14:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.070 14:24:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:40.070 00:30:40.070 real 0m41.964s 00:30:40.070 user 1m18.197s 00:30:40.070 sys 0m11.628s 00:30:40.070 14:24:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:40.070 14:24:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:40.070 ************************************ 00:30:40.070 END TEST nvmf_digest 00:30:40.070 ************************************ 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.329 ************************************ 00:30:40.329 START TEST nvmf_bdevperf 00:30:40.329 ************************************ 00:30:40.329 14:24:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:40.329 * Looking for test storage... 00:30:40.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.329 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.330 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.330 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.330 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:40.330 14:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.865 14:24:59 
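nvmftestinit now walks the PCI bus looking for NICs this test can drive; the e810 array built below matches Intel device IDs 0x1592 and 0x159b, and each hit is resolved to its kernel net device through sysfs. An illustrative stand-alone equivalent of that scan (not the script's exact implementation; it uses the same /sys/bus/pci/devices/.../net path seen in the trace):

    # list E810 ports (vendor 0x8086, device 0x159b on this rig) and the
    # netdev behind each one
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done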
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:42.865 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.865 14:24:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:42.865 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:42.865 Found net devices under 0000:84:00.0: cvl_0_0 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:42.865 Found net devices under 0000:84:00.1: cvl_0_1 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.865 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.866 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.866 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.866 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.866 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.866 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.866 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.866 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:43.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:30:43.124 00:30:43.124 --- 10.0.0.2 ping statistics --- 00:30:43.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.124 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:30:43.124 00:30:43.124 --- 10.0.0.1 ping statistics --- 00:30:43.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.124 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:30:43.124 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2639962 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2639962 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2639962 ']' 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:43.125 14:24:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.125 [2024-07-26 14:24:59.939530] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
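At this point the two-port loopback is in place: one E810 port (cvl_0_0) sits in a private network namespace with 10.0.0.2/24, its sibling (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions answered a ping, and the target was launched inside the namespace. Condensed from the ip/iptables calls in the trace above (interface names, addresses, and flags as printed there):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target (pid 2639962 above) runs inside the same namespace, launched
    # from the spdk checkout:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE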
00:30:43.125 [2024-07-26 14:24:59.939622] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.125 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.383 [2024-07-26 14:25:00.035623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:43.383 [2024-07-26 14:25:00.185767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.383 [2024-07-26 14:25:00.185844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.383 [2024-07-26 14:25:00.185865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.383 [2024-07-26 14:25:00.185881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.383 [2024-07-26 14:25:00.185910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.383 [2024-07-26 14:25:00.185993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.383 [2024-07-26 14:25:00.186049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.384 [2024-07-26 14:25:00.186053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.642 [2024-07-26 14:25:00.355056] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.642 Malloc0 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.642 14:25:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.642 [2024-07-26 14:25:00.421484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.642 { 00:30:43.642 "params": { 00:30:43.642 "name": "Nvme$subsystem", 00:30:43.642 "trtype": "$TEST_TRANSPORT", 00:30:43.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.642 "adrfam": "ipv4", 00:30:43.642 "trsvcid": "$NVMF_PORT", 00:30:43.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.642 "hdgst": ${hdgst:-false}, 00:30:43.642 "ddgst": ${ddgst:-false} 00:30:43.642 }, 00:30:43.642 "method": "bdev_nvme_attach_controller" 00:30:43.642 } 00:30:43.642 EOF 00:30:43.642 )") 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:43.642 14:25:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.642 "params": { 00:30:43.642 "name": "Nvme1", 00:30:43.642 "trtype": "tcp", 00:30:43.642 "traddr": "10.0.0.2", 00:30:43.642 "adrfam": "ipv4", 00:30:43.642 "trsvcid": "4420", 00:30:43.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.642 "hdgst": false, 00:30:43.642 "ddgst": false 00:30:43.642 }, 00:30:43.642 "method": "bdev_nvme_attach_controller" 00:30:43.642 }' 00:30:43.642 [2024-07-26 14:25:00.475323] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
00:30:43.643 [2024-07-26 14:25:00.475412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640095 ] 00:30:43.643 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.901 [2024-07-26 14:25:00.546656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.901 [2024-07-26 14:25:00.671116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.159 Running I/O for 1 seconds... 00:30:45.093 00:30:45.093 Latency(us) 00:30:45.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.093 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:45.093 Verification LBA range: start 0x0 length 0x4000 00:30:45.093 Nvme1n1 : 1.01 7933.68 30.99 0.00 0.00 16064.86 3640.89 14175.19 00:30:45.093 =================================================================================================================== 00:30:45.093 Total : 7933.68 30.99 0.00 0.00 16064.86 3640.89 14175.19 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2640247 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.352 { 00:30:45.352 "params": { 00:30:45.352 "name": "Nvme$subsystem", 00:30:45.352 "trtype": "$TEST_TRANSPORT", 00:30:45.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.352 "adrfam": "ipv4", 00:30:45.352 "trsvcid": "$NVMF_PORT", 00:30:45.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.352 "hdgst": ${hdgst:-false}, 00:30:45.352 "ddgst": ${ddgst:-false} 00:30:45.352 }, 00:30:45.352 "method": "bdev_nvme_attach_controller" 00:30:45.352 } 00:30:45.352 EOF 00:30:45.352 )") 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:45.352 14:25:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:45.352 "params": { 00:30:45.352 "name": "Nvme1", 00:30:45.352 "trtype": "tcp", 00:30:45.352 "traddr": "10.0.0.2", 00:30:45.352 "adrfam": "ipv4", 00:30:45.352 "trsvcid": "4420", 00:30:45.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.352 "hdgst": false, 00:30:45.352 "ddgst": false 00:30:45.352 }, 00:30:45.352 "method": "bdev_nvme_attach_controller" 00:30:45.352 }' 00:30:45.611 [2024-07-26 14:25:02.273078] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
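With the target serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and the short -t 1 bdevperf pass above confirming clean I/O, the second bdevperf instance is the failover exercise proper: fifteen seconds of verify traffic, with the generated attach-controller JSON fed in over bash process substitution (the /dev/fd/63 seen in the command line). A condensed sketch of the traced bdevperf.sh steps:

    # Failover step, condensed from the host/bdevperf.sh trace above.
    # gen_nvmf_target_json (from nvmf/common.sh) emits the attach-controller
    # JSON printed in the trace; <(...) is what appears as /dev/fd/63.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"    # hard-kill the target mid-run: every queued I/O aborts
    sleep 3               # give the host time to notice and start reconnecting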
00:30:45.611 [2024-07-26 14:25:02.273166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640247 ] 00:30:45.611 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.611 [2024-07-26 14:25:02.343944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.611 [2024-07-26 14:25:02.467086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.869 Running I/O for 15 seconds... 00:30:48.404 14:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2639962 00:30:48.404 14:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:48.404 [2024-07-26 14:25:05.234734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.404 [2024-07-26 14:25:05.234790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.404 [2024-07-26 14:25:05.234846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.404 [2024-07-26 14:25:05.234866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.404 [2024-07-26 14:25:05.234887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.404 [2024-07-26 14:25:05.234905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.404 [2024-07-26 14:25:05.234924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.404 [2024-07-26 14:25:05.234940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.404 [2024-07-26 14:25:05.234957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.404 [2024-07-26 14:25:05.234974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.404 [2024-07-26 14:25:05.234991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.404 [2024-07-26 14:25:05.235009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.404 [2024-07-26 14:25:05.235028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.404 [2024-07-26 14:25:05.235044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.404 [2024-07-26 14:25:05.235063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.404 [2024-07-26 14:25:05.235080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.404 [... ~120 near-identical nvme_qpair.c pairs elided: every remaining outstanding command on qid:1 (READ lba:28448-28952 and WRITE lba:28968-29408, len:8 each) is printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) once the target is gone ...]
00:30:48.407 [2024-07-26 14:25:05.239150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d88a0 is same with the state(5) to be set
00:30:48.407 [2024-07-26 14:25:05.239172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:48.407 [2024-07-26 14:25:05.239186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:48.407 [2024-07-26 14:25:05.239200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28952 len:8 PRP1 0x0 PRP2 0x0
00:30:48.407 [2024-07-26 14:25:05.239214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.407 [2024-07-26
14:25:05.239288] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16d88a0 was disconnected and freed. reset controller. 00:30:48.407 [2024-07-26 14:25:05.243097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.407 [2024-07-26 14:25:05.243175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.407 [2024-07-26 14:25:05.243964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.407 [2024-07-26 14:25:05.243998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.407 [2024-07-26 14:25:05.244017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.407 [2024-07-26 14:25:05.244258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.407 [2024-07-26 14:25:05.244516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.407 [2024-07-26 14:25:05.244551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.407 [2024-07-26 14:25:05.244571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.407 [2024-07-26 14:25:05.248157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.407 [2024-07-26 14:25:05.257288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.407 [2024-07-26 14:25:05.257929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.407 [2024-07-26 14:25:05.257974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.407 [2024-07-26 14:25:05.257994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.407 [2024-07-26 14:25:05.258240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.408 [2024-07-26 14:25:05.258499] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.408 [2024-07-26 14:25:05.258523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.408 [2024-07-26 14:25:05.258540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.408 [2024-07-26 14:25:05.262125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
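Every reconnect attempt above dies in posix_sock_create with connect() errno = 111. On Linux that is ECONNREFUSED: the namespace and its route are still up, but nothing is listening on 10.0.0.2:4420 anymore after the kill -9, so the kernel answers the SYN with a reset. A quick way to confirm the errno name:

    # errno 111 on Linux is ECONNREFUSED - the address is reachable,
    # but no nvmf_tgt is listening on 10.0.0.2:4420 after the kill -9.
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # -> ECONNREFUSED Connection refused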
00:30:48.408 [2024-07-26 14:25:05.271223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.408 [2024-07-26 14:25:05.271871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.408 [2024-07-26 14:25:05.271915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.408 [2024-07-26 14:25:05.271936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.408 [2024-07-26 14:25:05.272181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.408 [2024-07-26 14:25:05.272449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.408 [2024-07-26 14:25:05.272474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.408 [2024-07-26 14:25:05.272490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.408 [2024-07-26 14:25:05.276071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.408 [2024-07-26 14:25:05.285172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.408 [2024-07-26 14:25:05.285785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.408 [2024-07-26 14:25:05.285847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.408 [2024-07-26 14:25:05.285867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.408 [2024-07-26 14:25:05.286113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.408 [2024-07-26 14:25:05.286357] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.408 [2024-07-26 14:25:05.286380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.408 [2024-07-26 14:25:05.286396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.667 [2024-07-26 14:25:05.289991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.667 [2024-07-26 14:25:05.299078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.667 [2024-07-26 14:25:05.299694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.667 [2024-07-26 14:25:05.299738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.667 [2024-07-26 14:25:05.299758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.667 [2024-07-26 14:25:05.300003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.667 [2024-07-26 14:25:05.300246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.667 [2024-07-26 14:25:05.300269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.667 [2024-07-26 14:25:05.300284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.667 [2024-07-26 14:25:05.303880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.667 [2024-07-26 14:25:05.312983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.667 [2024-07-26 14:25:05.313590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.667 [2024-07-26 14:25:05.313649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.667 [2024-07-26 14:25:05.313669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.667 [2024-07-26 14:25:05.313914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.667 [2024-07-26 14:25:05.314158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.667 [2024-07-26 14:25:05.314182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.667 [2024-07-26 14:25:05.314197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.667 [2024-07-26 14:25:05.317799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.667 [2024-07-26 14:25:05.326890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.667 [2024-07-26 14:25:05.327517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.667 [2024-07-26 14:25:05.327561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.667 [2024-07-26 14:25:05.327581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.667 [2024-07-26 14:25:05.327826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.667 [2024-07-26 14:25:05.328070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.667 [2024-07-26 14:25:05.328093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.667 [2024-07-26 14:25:05.328109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.667 [2024-07-26 14:25:05.331704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.667 [2024-07-26 14:25:05.340794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.667 [2024-07-26 14:25:05.341383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.667 [2024-07-26 14:25:05.341441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.667 [2024-07-26 14:25:05.341461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.667 [2024-07-26 14:25:05.341700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.667 [2024-07-26 14:25:05.341942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.667 [2024-07-26 14:25:05.341965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.667 [2024-07-26 14:25:05.341980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.667 [2024-07-26 14:25:05.345568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.667 [2024-07-26 14:25:05.354653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.667 [2024-07-26 14:25:05.355188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.667 [2024-07-26 14:25:05.355236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.667 [2024-07-26 14:25:05.355254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.667 [2024-07-26 14:25:05.355506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.667 [2024-07-26 14:25:05.355750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.667 [2024-07-26 14:25:05.355773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.355788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.359363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.668 [2024-07-26 14:25:05.368676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.369147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.369198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.369224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.369475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.369718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.369741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.369756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.373339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.668 [2024-07-26 14:25:05.382682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.383187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.383237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.383255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.383505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.383749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.383772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.383787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.387365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.668 [2024-07-26 14:25:05.396685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.397256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.397305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.397323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.397572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.397815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.397838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.397853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.401438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.668 [2024-07-26 14:25:05.410548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.411105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.411154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.411171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.411410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.411666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.411696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.411712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.415292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.668 [2024-07-26 14:25:05.424596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.425094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.425146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.425163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.425402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.425655] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.425679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.425695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.429273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.668 [2024-07-26 14:25:05.438574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.439122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.439171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.439189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.439440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.439683] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.439706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.439721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.443302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.668 [2024-07-26 14:25:05.452605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.453169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.453218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.453235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.453485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.453727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.453750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.453765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.457347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.668 [2024-07-26 14:25:05.466653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.467211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.467263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.467280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.467531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.467775] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.467797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.668 [2024-07-26 14:25:05.467812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.668 [2024-07-26 14:25:05.471389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.668 [2024-07-26 14:25:05.480691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.668 [2024-07-26 14:25:05.481206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.668 [2024-07-26 14:25:05.481238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.668 [2024-07-26 14:25:05.481256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.668 [2024-07-26 14:25:05.481509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.668 [2024-07-26 14:25:05.481753] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.668 [2024-07-26 14:25:05.481776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.669 [2024-07-26 14:25:05.481791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.669 [2024-07-26 14:25:05.485374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.669 [2024-07-26 14:25:05.494686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.669 [2024-07-26 14:25:05.495163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.669 [2024-07-26 14:25:05.495194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.669 [2024-07-26 14:25:05.495211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.669 [2024-07-26 14:25:05.495459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.669 [2024-07-26 14:25:05.495702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.669 [2024-07-26 14:25:05.495725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.669 [2024-07-26 14:25:05.495740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.669 [2024-07-26 14:25:05.499320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.669 [2024-07-26 14:25:05.508632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.669 [2024-07-26 14:25:05.509263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.669 [2024-07-26 14:25:05.509307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.669 [2024-07-26 14:25:05.509327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.669 [2024-07-26 14:25:05.509592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.669 [2024-07-26 14:25:05.509837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.669 [2024-07-26 14:25:05.509860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.669 [2024-07-26 14:25:05.509876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.669 [2024-07-26 14:25:05.513470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.669 [2024-07-26 14:25:05.522566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.669 [2024-07-26 14:25:05.523068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.669 [2024-07-26 14:25:05.523100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.669 [2024-07-26 14:25:05.523119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.669 [2024-07-26 14:25:05.523358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.669 [2024-07-26 14:25:05.523614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.669 [2024-07-26 14:25:05.523638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.669 [2024-07-26 14:25:05.523654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.669 [2024-07-26 14:25:05.527232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.669 [2024-07-26 14:25:05.536529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.669 [2024-07-26 14:25:05.537173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.669 [2024-07-26 14:25:05.537216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.669 [2024-07-26 14:25:05.537236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.669 [2024-07-26 14:25:05.537497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.669 [2024-07-26 14:25:05.537742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.669 [2024-07-26 14:25:05.537765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.669 [2024-07-26 14:25:05.537780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.669 [2024-07-26 14:25:05.541360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.669 [2024-07-26 14:25:05.550456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.669 [2024-07-26 14:25:05.550991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.669 [2024-07-26 14:25:05.551024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.669 [2024-07-26 14:25:05.551042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.669 [2024-07-26 14:25:05.551282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.669 [2024-07-26 14:25:05.551539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.669 [2024-07-26 14:25:05.551563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.669 [2024-07-26 14:25:05.551585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.928 [2024-07-26 14:25:05.555164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.929 [2024-07-26 14:25:05.564468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.565094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.565137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.565157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.565402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.565660] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.565685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.565701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.569282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.929 [2024-07-26 14:25:05.578395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.578877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.578910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.578929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.579168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.579411] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.579442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.579480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.583067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.929 [2024-07-26 14:25:05.592363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.592981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.593025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.593045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.593291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.593549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.593573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.593589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.597170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.929 [2024-07-26 14:25:05.606258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.606761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.606794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.606812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.607051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.607294] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.607317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.607333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.610937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.929 [2024-07-26 14:25:05.620227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.620731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.620763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.620781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.621019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.621262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.621285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.621301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.624888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.929 [2024-07-26 14:25:05.634180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.634809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.634853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.634873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.635118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.635361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.635385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.635401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.638990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.929 [2024-07-26 14:25:05.648074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.648570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.648603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.648621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.648866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.649110] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.649133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.649148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.652734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.929 [2024-07-26 14:25:05.662031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.662537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.662569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.662587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.662826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.663076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.929 [2024-07-26 14:25:05.663099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.929 [2024-07-26 14:25:05.663114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.929 [2024-07-26 14:25:05.666700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.929 [2024-07-26 14:25:05.676001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.929 [2024-07-26 14:25:05.676519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.929 [2024-07-26 14:25:05.676551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.929 [2024-07-26 14:25:05.676569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.929 [2024-07-26 14:25:05.676807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.929 [2024-07-26 14:25:05.677050] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.677073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.677089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.680687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.930 [2024-07-26 14:25:05.690001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.690515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.690547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.690565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.690804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.691048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.691072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.691094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.694688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.930 [2024-07-26 14:25:05.704037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.704551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.704582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.704600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.704839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.705082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.705106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.705120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.708706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.930 [2024-07-26 14:25:05.718022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.718547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.718579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.718597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.718835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.719088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.719111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.719126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.722722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.930 [2024-07-26 14:25:05.732041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.732577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.732609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.732627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.732865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.733108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.733131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.733146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.736744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.930 [2024-07-26 14:25:05.746047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.746553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.746590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.746609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.746848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.747090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.747113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.747129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.750716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.930 [2024-07-26 14:25:05.760013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.760519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.760551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.760568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.760806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.761049] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.761072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.761087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.764679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.930 [2024-07-26 14:25:05.773987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.774497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.774529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.774547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.774784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.775027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.775050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.775065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.778650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.930 [2024-07-26 14:25:05.787964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.788529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.788561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.930 [2024-07-26 14:25:05.788579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.930 [2024-07-26 14:25:05.788818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.930 [2024-07-26 14:25:05.789067] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.930 [2024-07-26 14:25:05.789091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.930 [2024-07-26 14:25:05.789106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.930 [2024-07-26 14:25:05.792711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.930 [2024-07-26 14:25:05.802014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.930 [2024-07-26 14:25:05.802530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.930 [2024-07-26 14:25:05.802562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:48.931 [2024-07-26 14:25:05.802580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:48.931 [2024-07-26 14:25:05.802819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:48.931 [2024-07-26 14:25:05.803062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.931 [2024-07-26 14:25:05.803086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.931 [2024-07-26 14:25:05.803101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.931 [2024-07-26 14:25:05.806689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.190 [2024-07-26 14:25:05.816006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.190 [2024-07-26 14:25:05.816529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.190 [2024-07-26 14:25:05.816561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.190 [2024-07-26 14:25:05.816579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.190 [2024-07-26 14:25:05.816817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.190 [2024-07-26 14:25:05.817060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.190 [2024-07-26 14:25:05.817083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.190 [2024-07-26 14:25:05.817099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.190 [2024-07-26 14:25:05.820690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.190 [2024-07-26 14:25:05.830001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.190 [2024-07-26 14:25:05.830417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.190 [2024-07-26 14:25:05.830455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.190 [2024-07-26 14:25:05.830473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.190 [2024-07-26 14:25:05.830711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.190 [2024-07-26 14:25:05.830953] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.190 [2024-07-26 14:25:05.830976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.190 [2024-07-26 14:25:05.830992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.190 [2024-07-26 14:25:05.834588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.190 [2024-07-26 14:25:05.843892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.190 [2024-07-26 14:25:05.844326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.190 [2024-07-26 14:25:05.844377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.190 [2024-07-26 14:25:05.844395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.190 [2024-07-26 14:25:05.844643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.190 [2024-07-26 14:25:05.844886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.190 [2024-07-26 14:25:05.844910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.190 [2024-07-26 14:25:05.844925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.190 [2024-07-26 14:25:05.848514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.190 .. 00:30:49.714 [2024-07-26 14:25:05.857816 .. 14:25:06.519474] (48 further iterations of the identical reconnect/reset cycle elided; every ~14 ms: nvme_ctrlr_disconnect resets nqn.2016-06.io.spdk:cnode1, connect() to 10.0.0.2:4420 fails with errno = 111, the flush of tqpair=0x14a8540 fails with Bad file descriptor, controller reinitialization fails, and _bdev_nvme_reset_ctrlr_complete logs "Resetting controller failed." -- only the timestamps differ between iterations)
00:30:49.714 [2024-07-26 14:25:06.528761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.714 [2024-07-26 14:25:06.529384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.714 [2024-07-26 14:25:06.529440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.714 [2024-07-26 14:25:06.529462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.714 [2024-07-26 14:25:06.529708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.714 [2024-07-26 14:25:06.529951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.714 [2024-07-26 14:25:06.529975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.714 [2024-07-26 14:25:06.529990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.714 [2024-07-26 14:25:06.533581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.714 [2024-07-26 14:25:06.542670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.714 [2024-07-26 14:25:06.543206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.714 [2024-07-26 14:25:06.543239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.714 [2024-07-26 14:25:06.543257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.714 [2024-07-26 14:25:06.543517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.714 [2024-07-26 14:25:06.543760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.714 [2024-07-26 14:25:06.543784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.714 [2024-07-26 14:25:06.543799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.714 [2024-07-26 14:25:06.547374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.714 [2024-07-26 14:25:06.556668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.714 [2024-07-26 14:25:06.557286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.714 [2024-07-26 14:25:06.557330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.714 [2024-07-26 14:25:06.557349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.714 [2024-07-26 14:25:06.557608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.714 [2024-07-26 14:25:06.557853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.714 [2024-07-26 14:25:06.557876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.714 [2024-07-26 14:25:06.557892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.714 [2024-07-26 14:25:06.561477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.714 [2024-07-26 14:25:06.570560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.714 [2024-07-26 14:25:06.571185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.714 [2024-07-26 14:25:06.571228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.714 [2024-07-26 14:25:06.571248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.714 [2024-07-26 14:25:06.571509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.714 [2024-07-26 14:25:06.571754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.714 [2024-07-26 14:25:06.571778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.714 [2024-07-26 14:25:06.571793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.714 [2024-07-26 14:25:06.575373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.714 [2024-07-26 14:25:06.584467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.714 [2024-07-26 14:25:06.585007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.714 [2024-07-26 14:25:06.585059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.714 [2024-07-26 14:25:06.585077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.714 [2024-07-26 14:25:06.585316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.714 [2024-07-26 14:25:06.585573] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.714 [2024-07-26 14:25:06.585597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.714 [2024-07-26 14:25:06.585623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.714 [2024-07-26 14:25:06.589202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.974 [2024-07-26 14:25:06.598506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.974 [2024-07-26 14:25:06.599106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.974 [2024-07-26 14:25:06.599166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.974 [2024-07-26 14:25:06.599186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.974 [2024-07-26 14:25:06.599444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.974 [2024-07-26 14:25:06.599689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.974 [2024-07-26 14:25:06.599713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.974 [2024-07-26 14:25:06.599729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.974 [2024-07-26 14:25:06.603306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.974 [2024-07-26 14:25:06.612391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.974 [2024-07-26 14:25:06.613014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.974 [2024-07-26 14:25:06.613058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.974 [2024-07-26 14:25:06.613077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.974 [2024-07-26 14:25:06.613323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.974 [2024-07-26 14:25:06.613580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.974 [2024-07-26 14:25:06.613605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.974 [2024-07-26 14:25:06.613621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.974 [2024-07-26 14:25:06.617201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.974 [2024-07-26 14:25:06.626286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.974 [2024-07-26 14:25:06.626851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.974 [2024-07-26 14:25:06.626903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.974 [2024-07-26 14:25:06.626921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.974 [2024-07-26 14:25:06.627160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.974 [2024-07-26 14:25:06.627402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.974 [2024-07-26 14:25:06.627426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.974 [2024-07-26 14:25:06.627454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.974 [2024-07-26 14:25:06.631032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.974 [2024-07-26 14:25:06.640323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.974 [2024-07-26 14:25:06.640845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.974 [2024-07-26 14:25:06.640898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.974 [2024-07-26 14:25:06.640916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.974 [2024-07-26 14:25:06.641155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.974 [2024-07-26 14:25:06.641397] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.974 [2024-07-26 14:25:06.641421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.974 [2024-07-26 14:25:06.641449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.645027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.975 [2024-07-26 14:25:06.654318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.654854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.654905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.654923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.655161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.655404] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.655437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.655455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.659031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.975 [2024-07-26 14:25:06.668319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.668812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.668862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.668880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.669118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.669360] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.669383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.669398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.672983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.975 [2024-07-26 14:25:06.682271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.682748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.682797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.682814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.683052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.683301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.683324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.683340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.686927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.975 [2024-07-26 14:25:06.696213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.696697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.696760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.696778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.697015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.697257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.697280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.697295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.700882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.975 [2024-07-26 14:25:06.710171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.710653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.710704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.710722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.710959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.711202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.711225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.711240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.714841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
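After each refused connect the qpair's socket is torn down, so the follow-up flush in nvme_tcp_qpair_process_completions reports "(9): Bad file descriptor". A small sketch of that secondary symptom (illustrative only; it assumes Linux errno numbering, where EBADF is 9):

/* Illustrative only: once the failed socket has been closed, any late
 * I/O on the stale fd fails with EBADF (9 on Linux), matching the
 * "Failed to flush ... (9): Bad file descriptor" entries above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    close(fd);              /* qpair socket torn down after the refused connect */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {
        /* Prints: Failed to flush (9): Bad file descriptor */
        printf("Failed to flush (%d): %s\n", errno, strerror(errno));
    }

    return 0;
}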
00:30:49.975 [2024-07-26 14:25:06.724126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.724600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.724650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.724669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.724907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.725149] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.725173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.725188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.728781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.975 [2024-07-26 14:25:06.738068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.738543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.738598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.738615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.738853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.739096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.739119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.739134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.742720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.975 [2024-07-26 14:25:06.752031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.752535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.752567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.752586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.752824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.753067] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.753090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.753105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.975 [2024-07-26 14:25:06.756693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.975 [2024-07-26 14:25:06.765988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.975 [2024-07-26 14:25:06.766484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.975 [2024-07-26 14:25:06.766526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.975 [2024-07-26 14:25:06.766544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.975 [2024-07-26 14:25:06.766782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.975 [2024-07-26 14:25:06.767025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.975 [2024-07-26 14:25:06.767048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.975 [2024-07-26 14:25:06.767063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.976 [2024-07-26 14:25:06.770653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.976 [2024-07-26 14:25:06.779945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.976 [2024-07-26 14:25:06.780517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.976 [2024-07-26 14:25:06.780576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.976 [2024-07-26 14:25:06.780603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.976 [2024-07-26 14:25:06.780849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.976 [2024-07-26 14:25:06.781093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.976 [2024-07-26 14:25:06.781117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.976 [2024-07-26 14:25:06.781132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.976 [2024-07-26 14:25:06.784731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.976 [2024-07-26 14:25:06.793835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.976 [2024-07-26 14:25:06.794342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.976 [2024-07-26 14:25:06.794380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.976 [2024-07-26 14:25:06.794398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.976 [2024-07-26 14:25:06.794650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.976 [2024-07-26 14:25:06.794893] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.976 [2024-07-26 14:25:06.794918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.976 [2024-07-26 14:25:06.794933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.976 [2024-07-26 14:25:06.798520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.976 [2024-07-26 14:25:06.807816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.976 [2024-07-26 14:25:06.808320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.976 [2024-07-26 14:25:06.808352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.976 [2024-07-26 14:25:06.808370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.976 [2024-07-26 14:25:06.808622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.976 [2024-07-26 14:25:06.808865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.976 [2024-07-26 14:25:06.808888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.976 [2024-07-26 14:25:06.808903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.976 [2024-07-26 14:25:06.812492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.976 [2024-07-26 14:25:06.821796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.976 [2024-07-26 14:25:06.822353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.976 [2024-07-26 14:25:06.822402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.976 [2024-07-26 14:25:06.822420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.976 [2024-07-26 14:25:06.822673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.976 [2024-07-26 14:25:06.822922] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.976 [2024-07-26 14:25:06.822945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.976 [2024-07-26 14:25:06.822961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.976 [2024-07-26 14:25:06.826547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.976 [2024-07-26 14:25:06.835845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.976 [2024-07-26 14:25:06.836385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.976 [2024-07-26 14:25:06.836417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.976 [2024-07-26 14:25:06.836448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.976 [2024-07-26 14:25:06.836695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.976 [2024-07-26 14:25:06.836938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.976 [2024-07-26 14:25:06.836961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.976 [2024-07-26 14:25:06.836976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.976 [2024-07-26 14:25:06.840562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.976 [2024-07-26 14:25:06.849852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.976 [2024-07-26 14:25:06.850363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.976 [2024-07-26 14:25:06.850414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:49.976 [2024-07-26 14:25:06.850444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:49.976 [2024-07-26 14:25:06.850686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:49.976 [2024-07-26 14:25:06.850929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.976 [2024-07-26 14:25:06.850953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.976 [2024-07-26 14:25:06.850968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.976 [2024-07-26 14:25:06.854552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.235 [2024-07-26 14:25:06.863862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.235 [2024-07-26 14:25:06.864310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.235 [2024-07-26 14:25:06.864341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.235 [2024-07-26 14:25:06.864359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.235 [2024-07-26 14:25:06.864609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.235 [2024-07-26 14:25:06.864854] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.235 [2024-07-26 14:25:06.864877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.235 [2024-07-26 14:25:06.864892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.235 [2024-07-26 14:25:06.868481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.235 [2024-07-26 14:25:06.877801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.235 [2024-07-26 14:25:06.878303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.235 [2024-07-26 14:25:06.878334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.235 [2024-07-26 14:25:06.878352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.235 [2024-07-26 14:25:06.878601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.235 [2024-07-26 14:25:06.878844] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.235 [2024-07-26 14:25:06.878868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.235 [2024-07-26 14:25:06.878883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.235 [2024-07-26 14:25:06.882483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.235 [2024-07-26 14:25:06.891803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.235 [2024-07-26 14:25:06.892452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.235 [2024-07-26 14:25:06.892508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.235 [2024-07-26 14:25:06.892528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.235 [2024-07-26 14:25:06.892773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.235 [2024-07-26 14:25:06.893016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.235 [2024-07-26 14:25:06.893040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.235 [2024-07-26 14:25:06.893055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.235 [2024-07-26 14:25:06.896653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.235 [2024-07-26 14:25:06.905760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.235 [2024-07-26 14:25:06.906297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.235 [2024-07-26 14:25:06.906350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.235 [2024-07-26 14:25:06.906368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.235 [2024-07-26 14:25:06.906618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.235 [2024-07-26 14:25:06.906862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.235 [2024-07-26 14:25:06.906885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.235 [2024-07-26 14:25:06.906900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.235 [2024-07-26 14:25:06.910530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.235 [2024-07-26 14:25:06.919664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.235 [2024-07-26 14:25:06.920198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.235 [2024-07-26 14:25:06.920250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.235 [2024-07-26 14:25:06.920274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.235 [2024-07-26 14:25:06.920526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.235 [2024-07-26 14:25:06.920769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.235 [2024-07-26 14:25:06.920793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.235 [2024-07-26 14:25:06.920808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:06.924399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.236 [2024-07-26 14:25:06.933524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:06.934048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:06.934080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:06.934098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:06.934337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:06.934591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:06.934615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:06.934630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:06.938215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.236 [2024-07-26 14:25:06.947542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:06.947988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:06.948036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:06.948054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:06.948292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:06.948558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:06.948582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:06.948597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:06.952180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.236 [2024-07-26 14:25:06.961504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:06.961971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:06.962023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:06.962041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:06.962280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:06.962534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:06.962564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:06.962580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:06.966166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.236 [2024-07-26 14:25:06.975499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:06.976022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:06.976074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:06.976092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:06.976330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:06.976585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:06.976609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:06.976624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:06.980211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.236 [2024-07-26 14:25:06.989549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:06.990111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:06.990161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:06.990178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:06.990416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:06.990672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:06.990707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:06.990723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:06.994306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.236 [2024-07-26 14:25:07.003412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:07.003949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:07.003999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:07.004016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:07.004255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:07.004521] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:07.004545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:07.004560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:07.008145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.236 [2024-07-26 14:25:07.017482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:07.017961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:07.018010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:07.018028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:07.018266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:07.018522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:07.018546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:07.018561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:07.022150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.236 [2024-07-26 14:25:07.031485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:07.031998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:07.032029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:07.032047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:07.032285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:07.032540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:07.032565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:07.032580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:07.036163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.236 [2024-07-26 14:25:07.045618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.236 [2024-07-26 14:25:07.046082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.236 [2024-07-26 14:25:07.046134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.236 [2024-07-26 14:25:07.046152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.236 [2024-07-26 14:25:07.046390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.236 [2024-07-26 14:25:07.046643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.236 [2024-07-26 14:25:07.046668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.236 [2024-07-26 14:25:07.046683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.236 [2024-07-26 14:25:07.050268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.237 [2024-07-26 14:25:07.059610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.237 [2024-07-26 14:25:07.060131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.237 [2024-07-26 14:25:07.060164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.237 [2024-07-26 14:25:07.060182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.237 [2024-07-26 14:25:07.060438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.237 [2024-07-26 14:25:07.060681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.237 [2024-07-26 14:25:07.060705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.237 [2024-07-26 14:25:07.060721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.237 [2024-07-26 14:25:07.064304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.237 [2024-07-26 14:25:07.073632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.237 [2024-07-26 14:25:07.074312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.237 [2024-07-26 14:25:07.074355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.237 [2024-07-26 14:25:07.074375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.237 [2024-07-26 14:25:07.074633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.237 [2024-07-26 14:25:07.074878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.237 [2024-07-26 14:25:07.074901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.237 [2024-07-26 14:25:07.074916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.237 [2024-07-26 14:25:07.078511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.237 [2024-07-26 14:25:07.087622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.237 [2024-07-26 14:25:07.088136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.237 [2024-07-26 14:25:07.088168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.237 [2024-07-26 14:25:07.088186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.237 [2024-07-26 14:25:07.088425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.237 [2024-07-26 14:25:07.088685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.237 [2024-07-26 14:25:07.088708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.237 [2024-07-26 14:25:07.088724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.237 [2024-07-26 14:25:07.092305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.237 [2024-07-26 14:25:07.101613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.237 [2024-07-26 14:25:07.102112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.237 [2024-07-26 14:25:07.102143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:50.237 [2024-07-26 14:25:07.102161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:50.237 [2024-07-26 14:25:07.102400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:50.237 [2024-07-26 14:25:07.102656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.237 [2024-07-26 14:25:07.102680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.237 [2024-07-26 14:25:07.102701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.237 [2024-07-26 14:25:07.106281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.237 [2024-07-26 14:25:07.115596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.237 [2024-07-26 14:25:07.116129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.237 [2024-07-26 14:25:07.116180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.237 [2024-07-26 14:25:07.116198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.237 [2024-07-26 14:25:07.116451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.237 [2024-07-26 14:25:07.116694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.237 [2024-07-26 14:25:07.116717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.237 [2024-07-26 14:25:07.116732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.237 [2024-07-26 14:25:07.120311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.496 [2024-07-26 14:25:07.129616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.496 [2024-07-26 14:25:07.130160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.496 [2024-07-26 14:25:07.130211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.496 [2024-07-26 14:25:07.130229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.496 [2024-07-26 14:25:07.130481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.496 [2024-07-26 14:25:07.130724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.496 [2024-07-26 14:25:07.130747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.496 [2024-07-26 14:25:07.130762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.496 [2024-07-26 14:25:07.134339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.496 [2024-07-26 14:25:07.143643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.496 [2024-07-26 14:25:07.144128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.496 [2024-07-26 14:25:07.144159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.496 [2024-07-26 14:25:07.144177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.496 [2024-07-26 14:25:07.144416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.496 [2024-07-26 14:25:07.144672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.496 [2024-07-26 14:25:07.144696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.496 [2024-07-26 14:25:07.144710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.496 [2024-07-26 14:25:07.148288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.496 [2024-07-26 14:25:07.157585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.496 [2024-07-26 14:25:07.158216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.496 [2024-07-26 14:25:07.158266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.496 [2024-07-26 14:25:07.158287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.496 [2024-07-26 14:25:07.158549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.158793] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.158817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.158832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.162414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.171525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.172160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.172203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.172223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.172485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.172730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.172754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.172770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.176351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.185457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.185959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.185991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.186009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.186248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.186506] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.186530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.186545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.190124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.199417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.199962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.199994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.200012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.200252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.200516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.200541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.200556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.204134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.213440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.213936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.213976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.213994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.214232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.214489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.214513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.214529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.218120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.227410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.227910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.227950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.227968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.228206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.228462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.228486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.228502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.232079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.241372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.241932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.241982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.242000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.242238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.242498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.242522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.242537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.246125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.255217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.255748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.255780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.255797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.256036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.256279] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.256301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.256317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.259905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.269201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.269702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.269744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.269761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.270000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.270243] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.497 [2024-07-26 14:25:07.270266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.497 [2024-07-26 14:25:07.270281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.497 [2024-07-26 14:25:07.273871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.497 [2024-07-26 14:25:07.283168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.497 [2024-07-26 14:25:07.283703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.497 [2024-07-26 14:25:07.283751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.497 [2024-07-26 14:25:07.283768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.497 [2024-07-26 14:25:07.284006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.497 [2024-07-26 14:25:07.284249] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.284272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.284287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.498 [2024-07-26 14:25:07.287880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.498 [2024-07-26 14:25:07.297179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.498 [2024-07-26 14:25:07.297753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.498 [2024-07-26 14:25:07.297803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.498 [2024-07-26 14:25:07.297837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.498 [2024-07-26 14:25:07.298077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.498 [2024-07-26 14:25:07.298319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.298342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.298358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.498 [2024-07-26 14:25:07.301952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.498 [2024-07-26 14:25:07.311042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.498 [2024-07-26 14:25:07.311574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.498 [2024-07-26 14:25:07.311606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.498 [2024-07-26 14:25:07.311624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.498 [2024-07-26 14:25:07.311863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.498 [2024-07-26 14:25:07.312105] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.312128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.312143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.498 [2024-07-26 14:25:07.315753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.498 [2024-07-26 14:25:07.325046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.498 [2024-07-26 14:25:07.325572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.498 [2024-07-26 14:25:07.325605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.498 [2024-07-26 14:25:07.325623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.498 [2024-07-26 14:25:07.325862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.498 [2024-07-26 14:25:07.326105] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.326128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.326143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.498 [2024-07-26 14:25:07.329734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.498 [2024-07-26 14:25:07.339030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.498 [2024-07-26 14:25:07.339531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.498 [2024-07-26 14:25:07.339563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.498 [2024-07-26 14:25:07.339580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.498 [2024-07-26 14:25:07.339819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.498 [2024-07-26 14:25:07.340061] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.340091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.340107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.498 [2024-07-26 14:25:07.343694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.498 [2024-07-26 14:25:07.352989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.498 [2024-07-26 14:25:07.353628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.498 [2024-07-26 14:25:07.353672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.498 [2024-07-26 14:25:07.353692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.498 [2024-07-26 14:25:07.353937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.498 [2024-07-26 14:25:07.354180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.354203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.354219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.498 [2024-07-26 14:25:07.357815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.498 [2024-07-26 14:25:07.366899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.498 [2024-07-26 14:25:07.367511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.498 [2024-07-26 14:25:07.367555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.498 [2024-07-26 14:25:07.367575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.498 [2024-07-26 14:25:07.367820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.498 [2024-07-26 14:25:07.368064] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.368087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.368102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.498 [2024-07-26 14:25:07.371698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.498 [2024-07-26 14:25:07.380797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.498 [2024-07-26 14:25:07.381310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.498 [2024-07-26 14:25:07.381342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.498 [2024-07-26 14:25:07.381360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.498 [2024-07-26 14:25:07.381612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.498 [2024-07-26 14:25:07.381856] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.498 [2024-07-26 14:25:07.381879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.498 [2024-07-26 14:25:07.381895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.757 [2024-07-26 14:25:07.385485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.757 [2024-07-26 14:25:07.394792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.757 [2024-07-26 14:25:07.395321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.757 [2024-07-26 14:25:07.395354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.757 [2024-07-26 14:25:07.395373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.757 [2024-07-26 14:25:07.395626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.395869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.395892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.395907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.399506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.408838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.409361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.409392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.409409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.409658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.409902] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.409926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.409941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.413531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.422868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.423361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.423411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.423437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.423685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.423928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.423951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.423967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.427560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.436872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.437393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.437454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.437480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.437720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.437963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.437986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.438001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.441590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.450896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.451372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.451403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.451421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.451668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.451911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.451934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.451950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.455535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.464831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.465301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.465333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.465350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.465610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.465854] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.465878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.465893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.469475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.478767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.479271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.479302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.479319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.479568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.479812] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.479835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.479857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.483446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.492759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.493286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.493339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.493356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.493609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.493852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.493874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.493889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.497479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.506815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.507339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.507371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.507389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.507639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.507883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.507906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.507921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.511508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.520832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.521276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.521308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.758 [2024-07-26 14:25:07.521326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.758 [2024-07-26 14:25:07.521576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.758 [2024-07-26 14:25:07.521819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.758 [2024-07-26 14:25:07.521843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.758 [2024-07-26 14:25:07.521857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.758 [2024-07-26 14:25:07.525444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.758 [2024-07-26 14:25:07.534741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.758 [2024-07-26 14:25:07.535224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.758 [2024-07-26 14:25:07.535274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.535292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.535540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.535784] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.535807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.535822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.539401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.759 [2024-07-26 14:25:07.548712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.759 [2024-07-26 14:25:07.549180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.759 [2024-07-26 14:25:07.549231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.549248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.549498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.549741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.549764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.549780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.553354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.759 [2024-07-26 14:25:07.562647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.759 [2024-07-26 14:25:07.563101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.759 [2024-07-26 14:25:07.563133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.563150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.563388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.563641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.563665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.563680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.567256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.759 [2024-07-26 14:25:07.576573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.759 [2024-07-26 14:25:07.577178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.759 [2024-07-26 14:25:07.577222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.577242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.577514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.577759] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.577782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.577798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.581383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.759 [2024-07-26 14:25:07.590527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.759 [2024-07-26 14:25:07.591001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.759 [2024-07-26 14:25:07.591033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.591052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.591291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.591553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.591577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.591592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.595173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.759 [2024-07-26 14:25:07.604504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.759 [2024-07-26 14:25:07.605112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.759 [2024-07-26 14:25:07.605155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.605175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.605420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.605676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.605700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.605715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.609301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.759 [2024-07-26 14:25:07.618411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.759 [2024-07-26 14:25:07.618887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.759 [2024-07-26 14:25:07.618919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.618937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.619176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.619419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.619453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.619476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.623061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.759 [2024-07-26 14:25:07.632371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.759 [2024-07-26 14:25:07.633009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.759 [2024-07-26 14:25:07.633053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:50.759 [2024-07-26 14:25:07.633072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:50.759 [2024-07-26 14:25:07.633318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:50.759 [2024-07-26 14:25:07.633578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.759 [2024-07-26 14:25:07.633602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.759 [2024-07-26 14:25:07.633617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.759 [2024-07-26 14:25:07.637200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.018 [2024-07-26 14:25:07.646288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.018 [2024-07-26 14:25:07.646865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.018 [2024-07-26 14:25:07.646914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.018 [2024-07-26 14:25:07.646932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.018 [2024-07-26 14:25:07.647171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.018 [2024-07-26 14:25:07.647413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.018 [2024-07-26 14:25:07.647452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.018 [2024-07-26 14:25:07.647468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.018 [2024-07-26 14:25:07.651047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.018 [2024-07-26 14:25:07.660130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.018 [2024-07-26 14:25:07.660747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.018 [2024-07-26 14:25:07.660792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.018 [2024-07-26 14:25:07.660812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.018 [2024-07-26 14:25:07.661057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.018 [2024-07-26 14:25:07.661301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.018 [2024-07-26 14:25:07.661324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.018 [2024-07-26 14:25:07.661339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.018 [2024-07-26 14:25:07.664936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.018 [2024-07-26 14:25:07.674018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.018 [2024-07-26 14:25:07.674544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.018 [2024-07-26 14:25:07.674583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.018 [2024-07-26 14:25:07.674602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.018 [2024-07-26 14:25:07.674841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.018 [2024-07-26 14:25:07.675084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.018 [2024-07-26 14:25:07.675107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.018 [2024-07-26 14:25:07.675123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.018 [2024-07-26 14:25:07.678715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.018 [2024-07-26 14:25:07.688013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.018 [2024-07-26 14:25:07.688606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.018 [2024-07-26 14:25:07.688665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.018 [2024-07-26 14:25:07.688685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.018 [2024-07-26 14:25:07.688930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.019 [2024-07-26 14:25:07.689174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.019 [2024-07-26 14:25:07.689197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.019 [2024-07-26 14:25:07.689212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.019 [2024-07-26 14:25:07.692809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.019 [2024-07-26 14:25:07.701902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.019 [2024-07-26 14:25:07.702537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.019 [2024-07-26 14:25:07.702582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.019 [2024-07-26 14:25:07.702601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.019 [2024-07-26 14:25:07.702846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.019 [2024-07-26 14:25:07.703091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.019 [2024-07-26 14:25:07.703114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.019 [2024-07-26 14:25:07.703129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.019 [2024-07-26 14:25:07.706724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.019 [2024-07-26 14:25:07.715816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.019 [2024-07-26 14:25:07.716424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.019 [2024-07-26 14:25:07.716478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.019 [2024-07-26 14:25:07.716498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.019 [2024-07-26 14:25:07.716743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.019 [2024-07-26 14:25:07.717005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.019 [2024-07-26 14:25:07.717030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.019 [2024-07-26 14:25:07.717046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.019 [2024-07-26 14:25:07.720638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.019 [2024-07-26 14:25:07.729726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.019 [2024-07-26 14:25:07.730348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.019 [2024-07-26 14:25:07.730391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.019 [2024-07-26 14:25:07.730411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.019 [2024-07-26 14:25:07.730668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.019 [2024-07-26 14:25:07.730913] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.019 [2024-07-26 14:25:07.730937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.019 [2024-07-26 14:25:07.730953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.019 [2024-07-26 14:25:07.734543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.019 [2024-07-26 14:25:07.743636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.019 [2024-07-26 14:25:07.744262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.019 [2024-07-26 14:25:07.744306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.019 [2024-07-26 14:25:07.744325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.019 [2024-07-26 14:25:07.744587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.019 [2024-07-26 14:25:07.744832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.019 [2024-07-26 14:25:07.744855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.019 [2024-07-26 14:25:07.744871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.019 [2024-07-26 14:25:07.748488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.019 [2024-07-26 14:25:07.757592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.019 [2024-07-26 14:25:07.758050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.019 [2024-07-26 14:25:07.758083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.019 [2024-07-26 14:25:07.758103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.019 [2024-07-26 14:25:07.758343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.019 [2024-07-26 14:25:07.758598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.019 [2024-07-26 14:25:07.758622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.019 [2024-07-26 14:25:07.758637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.019 [2024-07-26 14:25:07.762224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.019 [2024-07-26 14:25:07.771547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.019 [2024-07-26 14:25:07.772090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.019 [2024-07-26 14:25:07.772142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.019 [2024-07-26 14:25:07.772161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.019 [2024-07-26 14:25:07.772400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.019 [2024-07-26 14:25:07.772652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.019 [2024-07-26 14:25:07.772676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.019 [2024-07-26 14:25:07.772691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.019 [2024-07-26 14:25:07.776270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.019 [2024-07-26 14:25:07.785580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.019 [2024-07-26 14:25:07.786214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.019 [2024-07-26 14:25:07.786258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.019 [2024-07-26 14:25:07.786278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.019 [2024-07-26 14:25:07.786537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.019 [2024-07-26 14:25:07.786782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.019 [2024-07-26 14:25:07.786806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.019 [2024-07-26 14:25:07.786821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.019 [2024-07-26 14:25:07.790410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.019 [2024-07-26 14:25:07.799515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.019 [2024-07-26 14:25:07.800130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.019 [2024-07-26 14:25:07.800182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.019 [2024-07-26 14:25:07.800201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.019 [2024-07-26 14:25:07.800462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.019 [2024-07-26 14:25:07.800707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.019 [2024-07-26 14:25:07.800731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.019 [2024-07-26 14:25:07.800746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.019 [2024-07-26 14:25:07.804325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.019 [2024-07-26 14:25:07.813405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.019 [2024-07-26 14:25:07.814043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.019 [2024-07-26 14:25:07.814087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.019 [2024-07-26 14:25:07.814114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.020 [2024-07-26 14:25:07.814360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.020 [2024-07-26 14:25:07.814618] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.020 [2024-07-26 14:25:07.814642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.020 [2024-07-26 14:25:07.814657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.020 [2024-07-26 14:25:07.818252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.020 [2024-07-26 14:25:07.827334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.020 [2024-07-26 14:25:07.827957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.020 [2024-07-26 14:25:07.828002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.020 [2024-07-26 14:25:07.828021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.020 [2024-07-26 14:25:07.828267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.020 [2024-07-26 14:25:07.828525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.020 [2024-07-26 14:25:07.828549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.020 [2024-07-26 14:25:07.828565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.020 [2024-07-26 14:25:07.832142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.020 [2024-07-26 14:25:07.841224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.020 [2024-07-26 14:25:07.841758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.020 [2024-07-26 14:25:07.841792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.020 [2024-07-26 14:25:07.841811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.020 [2024-07-26 14:25:07.842050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.020 [2024-07-26 14:25:07.842293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.020 [2024-07-26 14:25:07.842316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.020 [2024-07-26 14:25:07.842332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.020 [2024-07-26 14:25:07.845919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.020 [2024-07-26 14:25:07.855207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.020 [2024-07-26 14:25:07.855834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.020 [2024-07-26 14:25:07.855877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.020 [2024-07-26 14:25:07.855897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.020 [2024-07-26 14:25:07.856142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.020 [2024-07-26 14:25:07.856385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.020 [2024-07-26 14:25:07.856415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.020 [2024-07-26 14:25:07.856445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.020 [2024-07-26 14:25:07.860026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.020 [2024-07-26 14:25:07.869113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.020 [2024-07-26 14:25:07.869642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.020 [2024-07-26 14:25:07.869675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.020 [2024-07-26 14:25:07.869694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.020 [2024-07-26 14:25:07.869933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.020 [2024-07-26 14:25:07.870176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.020 [2024-07-26 14:25:07.870199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.020 [2024-07-26 14:25:07.870214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.020 [2024-07-26 14:25:07.873804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.020 [2024-07-26 14:25:07.883130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.020 [2024-07-26 14:25:07.883667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.020 [2024-07-26 14:25:07.883701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.020 [2024-07-26 14:25:07.883720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.020 [2024-07-26 14:25:07.883959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.020 [2024-07-26 14:25:07.884202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.020 [2024-07-26 14:25:07.884225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.020 [2024-07-26 14:25:07.884240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.020 [2024-07-26 14:25:07.887827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.020 [2024-07-26 14:25:07.897124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.020 [2024-07-26 14:25:07.897622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.020 [2024-07-26 14:25:07.897654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.020 [2024-07-26 14:25:07.897672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.020 [2024-07-26 14:25:07.897911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.020 [2024-07-26 14:25:07.898154] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.020 [2024-07-26 14:25:07.898177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.020 [2024-07-26 14:25:07.898192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.020 [2024-07-26 14:25:07.901780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.279 [2024-07-26 14:25:07.911083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.279 [2024-07-26 14:25:07.911614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.279 [2024-07-26 14:25:07.911645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.279 [2024-07-26 14:25:07.911662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.279 [2024-07-26 14:25:07.911900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.279 [2024-07-26 14:25:07.912143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.279 [2024-07-26 14:25:07.912167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.279 [2024-07-26 14:25:07.912182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.279 [2024-07-26 14:25:07.915773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.279 [2024-07-26 14:25:07.925091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.279 [2024-07-26 14:25:07.925558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.279 [2024-07-26 14:25:07.925590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.279 [2024-07-26 14:25:07.925608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.279 [2024-07-26 14:25:07.925847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.279 [2024-07-26 14:25:07.926089] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.279 [2024-07-26 14:25:07.926112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.279 [2024-07-26 14:25:07.926127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.279 [2024-07-26 14:25:07.929722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.279 [2024-07-26 14:25:07.939026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.279 [2024-07-26 14:25:07.939532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.279 [2024-07-26 14:25:07.939563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.279 [2024-07-26 14:25:07.939581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.279 [2024-07-26 14:25:07.939819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.279 [2024-07-26 14:25:07.940062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.279 [2024-07-26 14:25:07.940085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.279 [2024-07-26 14:25:07.940100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:07.943693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.280 [2024-07-26 14:25:07.952992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:07.953566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:07.953597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:07.953615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:07.953860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:07.954103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:07.954126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:07.954141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:07.957731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.280 [2024-07-26 14:25:07.967022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:07.967478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:07.967510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:07.967528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:07.967766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:07.968008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:07.968032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:07.968047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:07.971635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.280 [2024-07-26 14:25:07.980943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:07.981367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:07.981398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:07.981415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:07.981665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:07.981908] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:07.981931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:07.981946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:07.985533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.280 [2024-07-26 14:25:07.994821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:07.995244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:07.995274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:07.995292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:07.995542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:07.995785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:07.995808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:07.995832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:07.999410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.280 [2024-07-26 14:25:08.008744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:08.009189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:08.009221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:08.009238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:08.009486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:08.009729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:08.009752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:08.009767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:08.013340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.280 [2024-07-26 14:25:08.022652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:08.023108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:08.023139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:08.023157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:08.023395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:08.023648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:08.023672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:08.023687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:08.027264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.280 [2024-07-26 14:25:08.036560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:08.037029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:08.037059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:08.037076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:08.037314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:08.037569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:08.037592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:08.037608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:08.041182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.280 [2024-07-26 14:25:08.050474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:08.050934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:08.050965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:08.050983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:08.051221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:08.051475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:08.051499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:08.051514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:08.055089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.280 [2024-07-26 14:25:08.064376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:08.064836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:08.064867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:08.064884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:08.065122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:08.065364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:08.065387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:08.065402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.280 [2024-07-26 14:25:08.068990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.280 [2024-07-26 14:25:08.078284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.280 [2024-07-26 14:25:08.078710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.280 [2024-07-26 14:25:08.078741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.280 [2024-07-26 14:25:08.078759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.280 [2024-07-26 14:25:08.078996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.280 [2024-07-26 14:25:08.079239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.280 [2024-07-26 14:25:08.079262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.280 [2024-07-26 14:25:08.079277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.281 [2024-07-26 14:25:08.082868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.281 [2024-07-26 14:25:08.092166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.281 [2024-07-26 14:25:08.092573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.281 [2024-07-26 14:25:08.092604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.281 [2024-07-26 14:25:08.092621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.281 [2024-07-26 14:25:08.092865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.281 [2024-07-26 14:25:08.093108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.281 [2024-07-26 14:25:08.093132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.281 [2024-07-26 14:25:08.093148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.281 [2024-07-26 14:25:08.096733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.281 [2024-07-26 14:25:08.106058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.281 [2024-07-26 14:25:08.106513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.281 [2024-07-26 14:25:08.106544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.281 [2024-07-26 14:25:08.106562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.281 [2024-07-26 14:25:08.106800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.281 [2024-07-26 14:25:08.107042] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.281 [2024-07-26 14:25:08.107066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.281 [2024-07-26 14:25:08.107081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.281 [2024-07-26 14:25:08.110667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.281 [2024-07-26 14:25:08.119980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.281 [2024-07-26 14:25:08.120400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.281 [2024-07-26 14:25:08.120438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.281 [2024-07-26 14:25:08.120458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.281 [2024-07-26 14:25:08.120696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.281 [2024-07-26 14:25:08.120939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.281 [2024-07-26 14:25:08.120962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.281 [2024-07-26 14:25:08.120977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.281 [2024-07-26 14:25:08.124562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.281 [2024-07-26 14:25:08.133856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.281 [2024-07-26 14:25:08.134307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.281 [2024-07-26 14:25:08.134338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.281 [2024-07-26 14:25:08.134356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.281 [2024-07-26 14:25:08.134604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.281 [2024-07-26 14:25:08.134847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.281 [2024-07-26 14:25:08.134870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.281 [2024-07-26 14:25:08.134891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.281 [2024-07-26 14:25:08.138478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.281 [2024-07-26 14:25:08.147768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.281 [2024-07-26 14:25:08.148211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.281 [2024-07-26 14:25:08.148242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.281 [2024-07-26 14:25:08.148259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.281 [2024-07-26 14:25:08.148507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.281 [2024-07-26 14:25:08.148749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.281 [2024-07-26 14:25:08.148772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.281 [2024-07-26 14:25:08.148787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.281 [2024-07-26 14:25:08.152362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.281 [2024-07-26 14:25:08.161665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.281 [2024-07-26 14:25:08.162089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.281 [2024-07-26 14:25:08.162119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.281 [2024-07-26 14:25:08.162137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.281 [2024-07-26 14:25:08.162375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.281 [2024-07-26 14:25:08.162626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.281 [2024-07-26 14:25:08.162650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.281 [2024-07-26 14:25:08.162666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.540 [2024-07-26 14:25:08.166250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.540 [2024-07-26 14:25:08.175562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.540 [2024-07-26 14:25:08.175994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.540 [2024-07-26 14:25:08.176046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.540 [2024-07-26 14:25:08.176063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.540 [2024-07-26 14:25:08.176301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.540 [2024-07-26 14:25:08.176553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.540 [2024-07-26 14:25:08.176577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.540 [2024-07-26 14:25:08.176592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.540 [2024-07-26 14:25:08.180175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.540 [2024-07-26 14:25:08.189491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.540 [2024-07-26 14:25:08.189960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.540 [2024-07-26 14:25:08.190014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.540 [2024-07-26 14:25:08.190033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.540 [2024-07-26 14:25:08.190271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.540 [2024-07-26 14:25:08.190522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.540 [2024-07-26 14:25:08.190547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.540 [2024-07-26 14:25:08.190562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.540 [2024-07-26 14:25:08.194143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.540 [2024-07-26 14:25:08.203472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.540 [2024-07-26 14:25:08.203934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.540 [2024-07-26 14:25:08.203987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.540 [2024-07-26 14:25:08.204005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.540 [2024-07-26 14:25:08.204243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.540 [2024-07-26 14:25:08.204496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.540 [2024-07-26 14:25:08.204520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.540 [2024-07-26 14:25:08.204535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.540 [2024-07-26 14:25:08.208111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.540 [2024-07-26 14:25:08.217402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.540 [2024-07-26 14:25:08.217863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.540 [2024-07-26 14:25:08.217894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.540 [2024-07-26 14:25:08.217911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.540 [2024-07-26 14:25:08.218149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.540 [2024-07-26 14:25:08.218403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.540 [2024-07-26 14:25:08.218436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.540 [2024-07-26 14:25:08.218454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.540 [2024-07-26 14:25:08.222029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
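The identical cycle repeats through this whole stretch at roughly 14 ms intervals because the bdev layer keeps polling for a reconnect rather than giving up on the first refusal. How long it keeps trying, and how often, is set when the controller is attached. As a hedged illustration only (the RPC socket path and the timeout values below are invented for the example, not taken from this run), recent SPDK exposes these knobs on the attach RPC via scripts/rpc.py:
# illustrative attach with an explicit reconnect policy;
# --ctrlr-loss-timeout-sec -1 retries forever, --reconnect-delay-sec paces retries
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec -1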
00:30:51.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2639962 Killed "${NVMF_APP[@]}" "$@"
00:30:51.540 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:51.540 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:51.540 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:51.541 [2024-07-26 14:25:08.231311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.541 [2024-07-26 14:25:08.231793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.541 [2024-07-26 14:25:08.231824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.541 [2024-07-26 14:25:08.231842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.541 [2024-07-26 14:25:08.232080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.541 [2024-07-26 14:25:08.232322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.541 [2024-07-26 14:25:08.232345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.541 [2024-07-26 14:25:08.232360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2641010
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2641010
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2641010 ']'
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:51.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:51.541 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:51.541 [2024-07-26 14:25:08.235949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
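Here the trace explains the storm of refusals: the old target process was killed mid-test (the "Killed" line from bdevperf.sh), and tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then block in waitforlisten until the new process answers JSON-RPC on /var/tmp/spdk.sock. A simplified stand-in for that wait loop, assuming SPDK's scripts/rpc.py and the rpc_get_methods RPC; the real autotest_common.sh helper does more error handling than this sketch:
# poll until $pid is alive and its JSON-RPC socket responds, or give up after ~10 s
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1              # target process died
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}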
00:30:51.541 [2024-07-26 14:25:08.245240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.541 [2024-07-26 14:25:08.245671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.541 [2024-07-26 14:25:08.245702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.541 [2024-07-26 14:25:08.245720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.541 [2024-07-26 14:25:08.245959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.541 [2024-07-26 14:25:08.246201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.541 [2024-07-26 14:25:08.246224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.541 [2024-07-26 14:25:08.246239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.541 [2024-07-26 14:25:08.249827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.541 [2024-07-26 14:25:08.259120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.541 [2024-07-26 14:25:08.259576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.541 [2024-07-26 14:25:08.259608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.541 [2024-07-26 14:25:08.259625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.541 [2024-07-26 14:25:08.259864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.541 [2024-07-26 14:25:08.260112] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.541 [2024-07-26 14:25:08.260135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.541 [2024-07-26 14:25:08.260151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.541 [2024-07-26 14:25:08.263738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.541 [2024-07-26 14:25:08.273036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.541 [2024-07-26 14:25:08.273492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.541 [2024-07-26 14:25:08.273524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.541 [2024-07-26 14:25:08.273542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.541 [2024-07-26 14:25:08.273781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.541 [2024-07-26 14:25:08.274024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.541 [2024-07-26 14:25:08.274048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.541 [2024-07-26 14:25:08.274063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.541 [2024-07-26 14:25:08.277651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:51.541 [2024-07-26 14:25:08.284377] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:30:51.541 [2024-07-26 14:25:08.284461] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:51.541 [2024-07-26 14:25:08.286952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:51.541 [2024-07-26 14:25:08.287361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.541 [2024-07-26 14:25:08.287392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420
00:30:51.541 [2024-07-26 14:25:08.287410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set
00:30:51.541 [2024-07-26 14:25:08.287656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor
00:30:51.541 [2024-07-26 14:25:08.287898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:51.541 [2024-07-26 14:25:08.287922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:51.541 [2024-07-26 14:25:08.287937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:51.541 [2024-07-26 14:25:08.291521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
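The "Starting SPDK v24.09-pre" and DPDK EAL lines are the replacement target coming up, interleaved with the host side still failing its resets. Note that the -m 0xE mask from nvmfappstart reappears as -c 0xE in the EAL arguments: 0xE is binary 1110, so reactors run on cores 1 through 3 and core 0 is left to the OS. A small, purely illustrative decode of such a mask in bash:
mask=0xE  # the core mask passed via -m / -c above
for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done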
00:30:51.541 [2024-07-26 14:25:08.300812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.541 [2024-07-26 14:25:08.301269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.541 [2024-07-26 14:25:08.301319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.541 [2024-07-26 14:25:08.301337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.541 [2024-07-26 14:25:08.301585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.541 [2024-07-26 14:25:08.301834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.541 [2024-07-26 14:25:08.301858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.541 [2024-07-26 14:25:08.301874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.541 [2024-07-26 14:25:08.305461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.541 [2024-07-26 14:25:08.314922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.541 [2024-07-26 14:25:08.315367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.541 [2024-07-26 14:25:08.315420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.541 [2024-07-26 14:25:08.315448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.541 [2024-07-26 14:25:08.315689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.541 [2024-07-26 14:25:08.315932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.541 [2024-07-26 14:25:08.315955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.541 [2024-07-26 14:25:08.315970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.541 [2024-07-26 14:25:08.319570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.541 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.541 [2024-07-26 14:25:08.328861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.541 [2024-07-26 14:25:08.329255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.541 [2024-07-26 14:25:08.329304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.541 [2024-07-26 14:25:08.329321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.541 [2024-07-26 14:25:08.329570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.541 [2024-07-26 14:25:08.329813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.541 [2024-07-26 14:25:08.329836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.541 [2024-07-26 14:25:08.329851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.541 [2024-07-26 14:25:08.333435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.541 [2024-07-26 14:25:08.342734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.541 [2024-07-26 14:25:08.343177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.541 [2024-07-26 14:25:08.343224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.541 [2024-07-26 14:25:08.343241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.541 [2024-07-26 14:25:08.343489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.542 [2024-07-26 14:25:08.343731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.542 [2024-07-26 14:25:08.343755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.542 [2024-07-26 14:25:08.343770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.542 [2024-07-26 14:25:08.347357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
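The 'EAL: No free 2048 kB hugepages reported on node 1' notice comes from DPDK during nvmf target startup: NUMA node 1 simply has no 2 MB hugepages in its pool, so EAL allocates from node 0 only. The per-node pools can be checked directly through standard Linux sysfs paths, independent of this test:

  # node1 printing 0 here is exactly what the EAL notice reports.
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages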
00:30:51.542 [2024-07-26 14:25:08.356657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.542 [2024-07-26 14:25:08.357107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.542 [2024-07-26 14:25:08.357154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.542 [2024-07-26 14:25:08.357172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.542 [2024-07-26 14:25:08.357409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.542 [2024-07-26 14:25:08.357659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.542 [2024-07-26 14:25:08.357683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.542 [2024-07-26 14:25:08.357698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.542 [2024-07-26 14:25:08.361272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.542 [2024-07-26 14:25:08.361516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:51.542 [2024-07-26 14:25:08.370593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.542 [2024-07-26 14:25:08.371135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.542 [2024-07-26 14:25:08.371193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.542 [2024-07-26 14:25:08.371215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.542 [2024-07-26 14:25:08.371471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.542 [2024-07-26 14:25:08.371719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.542 [2024-07-26 14:25:08.371742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.542 [2024-07-26 14:25:08.371760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.542 [2024-07-26 14:25:08.375344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.542 [2024-07-26 14:25:08.384661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.542 [2024-07-26 14:25:08.385148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.542 [2024-07-26 14:25:08.385201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.542 [2024-07-26 14:25:08.385220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.542 [2024-07-26 14:25:08.385474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.542 [2024-07-26 14:25:08.385719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.542 [2024-07-26 14:25:08.385742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.542 [2024-07-26 14:25:08.385758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.542 [2024-07-26 14:25:08.389331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.542 [2024-07-26 14:25:08.398638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.542 [2024-07-26 14:25:08.399056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.542 [2024-07-26 14:25:08.399116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.542 [2024-07-26 14:25:08.399136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.542 [2024-07-26 14:25:08.399376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.542 [2024-07-26 14:25:08.399630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.542 [2024-07-26 14:25:08.399655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.542 [2024-07-26 14:25:08.399672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.542 [2024-07-26 14:25:08.403248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.542 [2024-07-26 14:25:08.412545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.542 [2024-07-26 14:25:08.412979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.542 [2024-07-26 14:25:08.413010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.542 [2024-07-26 14:25:08.413028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.542 [2024-07-26 14:25:08.413267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.542 [2024-07-26 14:25:08.413520] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.542 [2024-07-26 14:25:08.413545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.542 [2024-07-26 14:25:08.413561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.542 [2024-07-26 14:25:08.417137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.801 [2024-07-26 14:25:08.426464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.801 [2024-07-26 14:25:08.426935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.801 [2024-07-26 14:25:08.426990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.801 [2024-07-26 14:25:08.427010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.801 [2024-07-26 14:25:08.427255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.801 [2024-07-26 14:25:08.427512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.801 [2024-07-26 14:25:08.427536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.801 [2024-07-26 14:25:08.427555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.801 [2024-07-26 14:25:08.431140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.801 [2024-07-26 14:25:08.440466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.801 [2024-07-26 14:25:08.440940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.801 [2024-07-26 14:25:08.440981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.801 [2024-07-26 14:25:08.441003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.801 [2024-07-26 14:25:08.441251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.801 [2024-07-26 14:25:08.441521] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.801 [2024-07-26 14:25:08.441546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.801 [2024-07-26 14:25:08.441564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.801 [2024-07-26 14:25:08.445141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.801 [2024-07-26 14:25:08.454434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.801 [2024-07-26 14:25:08.454883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.801 [2024-07-26 14:25:08.454934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.801 [2024-07-26 14:25:08.454952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.801 [2024-07-26 14:25:08.455191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.801 [2024-07-26 14:25:08.455444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.801 [2024-07-26 14:25:08.455468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.801 [2024-07-26 14:25:08.455483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.801 [2024-07-26 14:25:08.459057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.801 [2024-07-26 14:25:08.468346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.801 [2024-07-26 14:25:08.468761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.801 [2024-07-26 14:25:08.468809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.801 [2024-07-26 14:25:08.468827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.801 [2024-07-26 14:25:08.469066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.801 [2024-07-26 14:25:08.469309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.801 [2024-07-26 14:25:08.469333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.801 [2024-07-26 14:25:08.469348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.472935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.802 [2024-07-26 14:25:08.482225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.482695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.482744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.482763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.483002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.483244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.483268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.483284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.483394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.802 [2024-07-26 14:25:08.483439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.802 [2024-07-26 14:25:08.483459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.802 [2024-07-26 14:25:08.483473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.802 [2024-07-26 14:25:08.483486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
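The app_setup_trace notices above spell out the capture recipe for the 0xFFFF tracepoint group mask the target was started with; gathered in one place, using only the commands the log itself names (the copy destination is illustrative):

  spdk_trace -s nvmf -i 0        # snapshot events from the live nvmf app
  cp /dev/shm/nvmf_trace.0 /tmp/ # or keep the shm trace file for offline analysis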
00:30:51.802 [2024-07-26 14:25:08.483544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.802 [2024-07-26 14:25:08.483600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.802 [2024-07-26 14:25:08.483604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.802 [2024-07-26 14:25:08.486877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.802 [2024-07-26 14:25:08.496199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.496766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.496812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.496835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.497087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.497336] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.497361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.497380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.500970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.802 [2024-07-26 14:25:08.510297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.510824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.510870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.510892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.511144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.511394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.511418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.511446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.515026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
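Reactors starting on cores 1, 2 and 3 (noted above) follows directly from the '-c 0xE' mask in the DPDK EAL parameters earlier: 0xE is binary 1110, i.e. bits 1 through 3 set. A shell one-liner to expand any such mask:

  mask=0xE; for c in {0..63}; do (( (mask >> c) & 1 )) && echo "reactor core $c"; done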
00:30:51.802 [2024-07-26 14:25:08.524366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.524910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.524956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.524978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.525227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.525503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.525528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.525547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.529127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.802 [2024-07-26 14:25:08.538449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.539022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.539083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.539106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.539355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.539612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.539637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.539656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.543235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.802 [2024-07-26 14:25:08.552324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.552872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.552936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.552959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.553208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.553469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.553494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.553514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.557093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.802 [2024-07-26 14:25:08.566412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.566973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.567035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.567057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.567306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.567564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.567590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.567609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.571195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.802 [2024-07-26 14:25:08.580487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.580945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.580994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.581012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.581252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.581504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.802 [2024-07-26 14:25:08.581528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.802 [2024-07-26 14:25:08.581544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.802 [2024-07-26 14:25:08.585126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.802 [2024-07-26 14:25:08.594413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.802 [2024-07-26 14:25:08.594890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.802 [2024-07-26 14:25:08.594939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.802 [2024-07-26 14:25:08.594957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.802 [2024-07-26 14:25:08.595196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.802 [2024-07-26 14:25:08.595448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.803 [2024-07-26 14:25:08.595490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.803 [2024-07-26 14:25:08.595509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.803 [2024-07-26 14:25:08.599090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
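While the reset loop spins, the controller state these errors describe can also be inspected out of band. A hypothetical step, not part of this run: bdev_nvme_get_controllers is a standard SPDK RPC, but the socket to point rpc.py at would depend on how bdevperf was launched here:

  # Lists each attached NVMe bdev controller and its current state.
  rpc.py bdev_nvme_get_controllers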
00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:51.803 [2024-07-26 14:25:08.608383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.803 [2024-07-26 14:25:08.608873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.803 [2024-07-26 14:25:08.608922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.803 [2024-07-26 14:25:08.608941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.803 [2024-07-26 14:25:08.609180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.803 [2024-07-26 14:25:08.609424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.803 [2024-07-26 14:25:08.609468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.803 [2024-07-26 14:25:08.609484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.803 [2024-07-26 14:25:08.613067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.803 [2024-07-26 14:25:08.622383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.803 [2024-07-26 14:25:08.622835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.803 [2024-07-26 14:25:08.622884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.803 [2024-07-26 14:25:08.622903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.803 [2024-07-26 14:25:08.623141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.803 [2024-07-26 14:25:08.623385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.803 [2024-07-26 14:25:08.623409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.803 [2024-07-26 14:25:08.623424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:51.803 [2024-07-26 14:25:08.627011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.803 [2024-07-26 14:25:08.630448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.803 [2024-07-26 14:25:08.636303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.803 [2024-07-26 14:25:08.636728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.803 [2024-07-26 14:25:08.636777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.803 [2024-07-26 14:25:08.636794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.803 [2024-07-26 14:25:08.637033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.803 [2024-07-26 14:25:08.637276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.803 [2024-07-26 14:25:08.637299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.803 [2024-07-26 14:25:08.637315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.803 [2024-07-26 14:25:08.640899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.803 [2024-07-26 14:25:08.650186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.803 [2024-07-26 14:25:08.650632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.803 [2024-07-26 14:25:08.650693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.803 [2024-07-26 14:25:08.650711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.803 [2024-07-26 14:25:08.650948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.803 [2024-07-26 14:25:08.651191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.803 [2024-07-26 14:25:08.651214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.803 [2024-07-26 14:25:08.651238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:51.803 [2024-07-26 14:25:08.654825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.803 [2024-07-26 14:25:08.664126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.803 [2024-07-26 14:25:08.664586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.803 [2024-07-26 14:25:08.664641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.803 [2024-07-26 14:25:08.664662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.803 [2024-07-26 14:25:08.664906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.803 [2024-07-26 14:25:08.665151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.803 [2024-07-26 14:25:08.665175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.803 [2024-07-26 14:25:08.665192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.803 [2024-07-26 14:25:08.668780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.803 [2024-07-26 14:25:08.678116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.803 Malloc0 00:30:51.803 [2024-07-26 14:25:08.678685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.803 [2024-07-26 14:25:08.678740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:51.803 [2024-07-26 14:25:08.678761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:51.803 [2024-07-26 14:25:08.679010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.803 [2024-07-26 14:25:08.679258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.803 [2024-07-26 14:25:08.679284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.803 [2024-07-26 14:25:08.679302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.803 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:51.803 [2024-07-26 14:25:08.682902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.804 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.804 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:52.062 [2024-07-26 14:25:08.691993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.062 [2024-07-26 14:25:08.692444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.062 [2024-07-26 14:25:08.692492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a8540 with addr=10.0.0.2, port=4420 00:30:52.062 [2024-07-26 14:25:08.692512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8540 is same with the state(5) to be set 00:30:52.062 [2024-07-26 14:25:08.692752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a8540 (9): Bad file descriptor 00:30:52.062 [2024-07-26 14:25:08.692995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:52.062 [2024-07-26 14:25:08.693019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:52.062 [2024-07-26 14:25:08.693034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:52.062 [2024-07-26 14:25:08.696621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:52.062 [2024-07-26 14:25:08.698481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.062 14:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2640247 00:30:52.062 [2024-07-26 14:25:08.705918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.062 [2024-07-26 14:25:08.776704] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
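Interleaved with the reset errors, the shell traces above rebuild the target step by step until the listener on 10.0.0.2:4420 comes back and the final controller reset succeeds. The same bring-up sequence, collected in one place (rpc_cmd is the harness wrapper around SPDK's rpc.py; arguments exactly as traced):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420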
00:31:02.055
00:31:02.055 Latency(us)
00:31:02.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:02.055 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:02.055 Verification LBA range: start 0x0 length 0x4000
00:31:02.055 Nvme1n1 : 15.01 6017.58 23.51 8560.76 0.00 8753.80 916.29 20194.80
00:31:02.055 ===================================================================================================================
00:31:02.055 Total : 6017.58 23.51 8560.76 0.00 8753.80 916.29 20194.80
00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2641010 ']'
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2641010
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2641010 ']'
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2641010
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2641010
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2641010'
killing process with pid 2641010
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2641010
14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2641010 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.055 14:25:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:03.961 00:31:03.961 real 0m23.541s 00:31:03.961 user 1m1.280s 00:31:03.961 sys 0m5.085s 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:03.961 ************************************ 00:31:03.961 END TEST nvmf_bdevperf 00:31:03.961 ************************************ 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.961 ************************************ 00:31:03.961 START TEST nvmf_target_disconnect 00:31:03.961 ************************************ 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:03.961 * Looking for test storage... 
00:31:03.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.961 
14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:03.961 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:31:03.962 14:25:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.495 
14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:06.495 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:06.495 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:06.495 Found net devices under 0000:84:00.0: cvl_0_0 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:06.495 Found net devices under 0000:84:00.1: cvl_0_1 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.495 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:06.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:31:06.496 00:31:06.496 --- 10.0.0.2 ping statistics --- 00:31:06.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.496 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:06.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:31:06.496 00:31:06.496 --- 10.0.0.1 ping statistics --- 00:31:06.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.496 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:06.496 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:06.754 ************************************ 00:31:06.754 START TEST nvmf_target_disconnect_tc1 00:31:06.754 ************************************ 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:06.754 14:25:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:06.754 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:06.755 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.755 [2024-07-26 14:25:23.555571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.755 [2024-07-26 14:25:23.555649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1052790 with addr=10.0.0.2, port=4420 00:31:06.755 [2024-07-26 14:25:23.555687] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:06.755 [2024-07-26 14:25:23.555714] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:06.755 [2024-07-26 14:25:23.555729] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:06.755 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:06.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:06.755 Initializing NVMe Controllers 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:06.755 00:31:06.755 real 0m0.144s 00:31:06.755 user 0m0.059s 00:31:06.755 sys 0m0.084s 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:06.755 ************************************ 00:31:06.755 END TEST nvmf_target_disconnect_tc1 00:31:06.755 ************************************ 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:06.755 14:25:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:06.755 ************************************ 00:31:06.755 START TEST nvmf_target_disconnect_tc2 00:31:06.755 ************************************ 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:06.755 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2644200 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2644200 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2644200 ']' 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.014 14:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:07.014 [2024-07-26 14:25:23.743792] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:31:07.014 [2024-07-26 14:25:23.743963] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.014 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.014 [2024-07-26 14:25:23.891600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:07.273 [2024-07-26 14:25:24.086797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:07.273 [2024-07-26 14:25:24.086911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.273 [2024-07-26 14:25:24.086948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.273 [2024-07-26 14:25:24.086978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.273 [2024-07-26 14:25:24.087003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.273 [2024-07-26 14:25:24.087169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:31:07.273 [2024-07-26 14:25:24.087255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:31:07.273 [2024-07-26 14:25:24.087309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:31:07.273 [2024-07-26 14:25:24.087313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.208 Malloc0 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.208 [2024-07-26 14:25:24.879744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
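The 10.0.0.x addresses this target is being configured on come from the nvmf_tcp_init plumbing traced above (nvmf/common.sh@229-268): the target-side port is moved into its own network namespace so initiator and target can speak NVMe/TCP over a real e810 link on a single host. Condensed to the commands that matter, this is a sketch only; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones this run reported:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port in the host firewall
    ping -c 1 10.0.0.2                                                  # sanity: root ns -> ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # sanity: ns -> root ns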
00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.208 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.209 [2024-07-26 14:25:24.908036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2644355 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.209 14:25:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:08.209 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.115 14:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2644200 00:31:10.115 14:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting 
I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Read completed with error (sct=0, sc=8) 00:31:10.115 starting I/O failed 00:31:10.115 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 [2024-07-26 14:25:26.934291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 
00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 [2024-07-26 14:25:26.934916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read 
completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 Write completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.116 [2024-07-26 14:25:26.935308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:10.116 Read completed with error (sct=0, sc=8) 00:31:10.116 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with 
error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Write completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 Read completed with error (sct=0, sc=8) 00:31:10.117 starting I/O failed 00:31:10.117 [2024-07-26 14:25:26.935675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:10.117 [2024-07-26 14:25:26.935930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.936023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.936324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.936393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.936620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.936650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.936889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.936941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.937245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.937310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.937586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.937615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.937846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.937900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.938133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.938198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 
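The queue pairs failing above were connected to the subsystem configured earlier through rpc_cmd, the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (the socket waitforlisten polled). In plain rpc.py terms the same setup is the following sketch, with flags copied from the rpc_cmd calls in the trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB backing bdev, 512 B blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o               # NVMF_TRANSPORT_OPTS the harness picked for tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420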
00:31:10.117 [2024-07-26 14:25:26.938491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.938519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.938761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.938820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.939108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.939171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.939494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.939523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.939742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.939777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.940082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.940146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.940463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.940512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.940694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.940723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.940946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.941010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.941281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.941345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 
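The failure injection itself (host/target_disconnect.sh@40-47 above) is deliberately blunt: start the reconnect example against that listener, give it two seconds to fill its 32-deep queues, then SIGKILL the target so every in-flight command and every subsequent connect() fails. Reduced to its essentials ($rootdir stands in for the spdk checkout; the pids are the ones this run recorded):

    "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!          # 2644355 in this run
    sleep 2                  # let I/O get in flight on qpairs 1-4
    kill -9 "$nvmfpid"       # 2644200: nvmf_tgt dies with no graceful teardown
    sleep 2                  # reconnect is now retrying against a dead 10.0.0.2:4420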
00:31:10.117 [2024-07-26 14:25:26.941655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.941704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.941923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.941987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.117 [2024-07-26 14:25:26.942285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.117 [2024-07-26 14:25:26.942349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.117 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.942697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.942798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.943139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.943210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.943514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.943545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.943717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.943774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.943946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.944011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.944327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.944392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.944647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.944676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 
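Each repeated block here is one reconnect attempt: posix_sock_create() issues a fresh TCP connect() to 10.0.0.2:4420, gets errno 111 because nothing is listening anymore, nvme_tcp_qpair_connect_sock propagates the socket error, and the qpair is abandoned and retried. The condition being polled can be watched from the shell as well; nc below is an illustrative stand-in, not something the harness runs:

    # poll until something accepts on the target address again
    while ! nc -z -w 1 10.0.0.2 4420 2>/dev/null; do
        sleep 0.1            # port closed: the same state each errno=111 block reports
    done                     # a restarted target flips this, which is what reconnect waits for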
00:31:10.118 [2024-07-26 14:25:26.944912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.944976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.945286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.945350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.945645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.945673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.945854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.945919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.946197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.946260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.946506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.946535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.946749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.946813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.947117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.947181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.947519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.947548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.947750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.947815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 
00:31:10.118 [2024-07-26 14:25:26.948134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.948198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.948488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.948517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.948701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.948760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.949052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.949080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.949397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.949486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.949658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.949686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.949869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.949898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.950102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.950136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.950324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.950388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.950645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.950673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 
00:31:10.118 [2024-07-26 14:25:26.950860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.950895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.951129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.951193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.951485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.951515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.951716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.951751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.952008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.952071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.118 [2024-07-26 14:25:26.952359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.118 [2024-07-26 14:25:26.952387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.118 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.952633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.952661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.952878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.952943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.953233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.953261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.953478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.953524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 
00:31:10.119 [2024-07-26 14:25:26.953748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.953812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.954093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.954121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.954324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.954359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.954594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.954623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.954835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.954864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.955125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.955159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.955403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.955506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.955728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.955756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.956021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.956056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.956310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.956375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 
00:31:10.119 [2024-07-26 14:25:26.956648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.956677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.956862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.956897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.957156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.957220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.957496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.957526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.957737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.957772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.958015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.958079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.958345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.958374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.958590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.958620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.958815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.958879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.959165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.959194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 
00:31:10.119 [2024-07-26 14:25:26.959401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.959453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.959685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.959754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.960009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.960037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.960238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.960273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.960518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.960547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.960738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.960766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.960972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.961007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.961243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.119 [2024-07-26 14:25:26.961308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.119 qpair failed and we were unable to recover it. 00:31:10.119 [2024-07-26 14:25:26.961581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.961610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.961793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.961829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 
00:31:10.120 [2024-07-26 14:25:26.962041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.962105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.962405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.962487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.962700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.962747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.963034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.963098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.963377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.963405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.963608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.963637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.963863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.963927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.964204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.964232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.964463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.964522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.964699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.964747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 
00:31:10.120 [2024-07-26 14:25:26.965032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.965060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.965262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.965326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.965622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.965651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.965862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.965889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.966162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.966197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.966415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.966495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.966717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.966750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.967048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.967083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.967302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.967365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.967638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.967667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 
00:31:10.120 [2024-07-26 14:25:26.967890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.967925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.968225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.968289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.968572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.968601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.968787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.968821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.969029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.969092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.969384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.969413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.969633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.969661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.969906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.969968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.970248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.970276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.970478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.970507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 
00:31:10.120 [2024-07-26 14:25:26.970682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.970750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.120 qpair failed and we were unable to recover it. 00:31:10.120 [2024-07-26 14:25:26.971033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.120 [2024-07-26 14:25:26.971062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.971235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.971271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.971477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.971530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.971712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.971741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.971959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.971993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.972245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.972308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.972601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.972630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.972807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.972842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.973024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.973088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 
00:31:10.121 [2024-07-26 14:25:26.973371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.973399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.973618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.973646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.973909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.973972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.974242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.974271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.974482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.974511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.974739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.974802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.975086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.975114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.975341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.975376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.975654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.975683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.975891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.975920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 
00:31:10.121 [2024-07-26 14:25:26.976100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.976135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.976342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.976406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.976689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.976718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.976973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.977008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.977237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.977301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.977602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.977630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.977821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.121 [2024-07-26 14:25:26.977862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.121 qpair failed and we were unable to recover it. 00:31:10.121 [2024-07-26 14:25:26.978074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.978138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.978426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.978461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.978650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.978694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 
00:31:10.122 [2024-07-26 14:25:26.978939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.979003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.979304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.979367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.979660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.979689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.979877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.979942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.980215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.980243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.980456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.980515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.980812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.980876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.981167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.981195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.981418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.981506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.981699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.981762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 
00:31:10.122 [2024-07-26 14:25:26.982061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.982089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.982286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.982350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.982654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.982682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.982892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.982920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.983166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.983201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.983472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.983532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.983724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.983753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.983994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.984029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.984255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.984320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.984612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.984641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 
00:31:10.122 [2024-07-26 14:25:26.984804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.984839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.985058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.985123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.985404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.985439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.985658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.985703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.985964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.986029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.986347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.986412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.986688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.986717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.986972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.987036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.987331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.987359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.122 qpair failed and we were unable to recover it. 00:31:10.122 [2024-07-26 14:25:26.987575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.122 [2024-07-26 14:25:26.987611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 
00:31:10.123 [2024-07-26 14:25:26.987822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.987886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.988197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.988226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.988503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.988538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.988771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.988835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.989112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.989140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.989370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.989405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.989630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.989663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.989907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.989936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.990124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.990159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.990392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.990486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 
00:31:10.123 [2024-07-26 14:25:26.990673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.990701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.990900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.990935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.991141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.991204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.991473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.991503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.991698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.991734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.991942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.992006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.992308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.992336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.992632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.992661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.992939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.992972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.993192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.993221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 
00:31:10.123 [2024-07-26 14:25:26.993447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.993509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.993733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.993762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.994085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.994148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.994405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.994477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.994729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.994792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.995051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.995080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.995296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.995329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.995536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.995565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.995780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.995808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.996151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.996214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 
00:31:10.123 [2024-07-26 14:25:26.996477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.996527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.996740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.996769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.123 qpair failed and we were unable to recover it. 00:31:10.123 [2024-07-26 14:25:26.997020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.123 [2024-07-26 14:25:26.997055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.124 qpair failed and we were unable to recover it. 00:31:10.124 [2024-07-26 14:25:26.997267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.124 [2024-07-26 14:25:26.997333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.124 qpair failed and we were unable to recover it. 00:31:10.124 [2024-07-26 14:25:26.997620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.124 [2024-07-26 14:25:26.997649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.124 qpair failed and we were unable to recover it. 00:31:10.124 [2024-07-26 14:25:26.997879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.124 [2024-07-26 14:25:26.997930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.124 qpair failed and we were unable to recover it. 00:31:10.398 [2024-07-26 14:25:26.998199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.398 [2024-07-26 14:25:26.998264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.398 qpair failed and we were unable to recover it. 00:31:10.398 [2024-07-26 14:25:26.998546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.398 [2024-07-26 14:25:26.998575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.398 qpair failed and we were unable to recover it. 00:31:10.398 [2024-07-26 14:25:26.998766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.398 [2024-07-26 14:25:26.998801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.398 qpair failed and we were unable to recover it. 00:31:10.398 [2024-07-26 14:25:26.999042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.398 [2024-07-26 14:25:26.999075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.398 qpair failed and we were unable to recover it. 
00:31:10.398 [2024-07-26 14:25:26.999279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:26.999307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:26.999493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:26.999538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:26.999702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:26.999775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:27.000059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:27.000088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:27.000389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:27.000423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:27.000667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:27.000722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:27.000976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:27.001010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:27.001166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:27.001202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:27.001451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:27.001520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 00:31:10.399 [2024-07-26 14:25:27.001736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.399 [2024-07-26 14:25:27.001765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.399 qpair failed and we were unable to recover it. 
00:31:10.399 [2024-07-26 14:25:27.002025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.002060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.002323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.002387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.002668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.002697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.002988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.003023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.003297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.003362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.003708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.003767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.004051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.004086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.004342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.004405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.004659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.004688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.004908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.004943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.005219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.005284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.005543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.005572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.005749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.005784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.005987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.006051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.006324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.006353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.006561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.006590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.006839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.006904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.007198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.007226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.007459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.007495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.007757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.007822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.008110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.008139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.008325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.008361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.008576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.008604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.399 [2024-07-26 14:25:27.008809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.399 [2024-07-26 14:25:27.008838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.399 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.009078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.009113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.009388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.009480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.009689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.009717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.010035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.010098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.010380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.010470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.010678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.010706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.010906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.010941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.011176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.011240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.011522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.011551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.011735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.011764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.012016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.012080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.012355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.012383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.012710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.012766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.013021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.013066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.013292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.013320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.013517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.013551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.013734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.013809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.014090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.014118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.014380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.014415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.014616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.014645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.014810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.014838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.015057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.015091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.015330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.015393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.015698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.015726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.016011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.016045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.016321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.016385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.016676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.016705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.016872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.016906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.017133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.017198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.017498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.017528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.017709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.017743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.017934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.017998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.400 qpair failed and we were unable to recover it.
00:31:10.400 [2024-07-26 14:25:27.018279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.400 [2024-07-26 14:25:27.018308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.018532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.018562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.018753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.018817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.019068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.019096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.019327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.019392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.019692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.019721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.020006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.020035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.020234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.020269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.020512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.020561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.020769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.020798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.020990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.021025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.021245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.021308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.021585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.021613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.021808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.021843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.022068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.022132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.022397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.022474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.022709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.022737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.022992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.023027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.023259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.023322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.023615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.023644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.023832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.023865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.024070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.024105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.024347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.024411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.024695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.024755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.025007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.025035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.025197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.025232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.025453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.025504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.025703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.025768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.026056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.026085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.026336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.026370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.026613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.026642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.026838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.026902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.027191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.027219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.027404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.401 [2024-07-26 14:25:27.027450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.401 qpair failed and we were unable to recover it.
00:31:10.401 [2024-07-26 14:25:27.027688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.027768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.028060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.028123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.028377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.028405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.028588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.028616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.028839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.028903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.029163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.029227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.029476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.029505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.029708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.029743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.029966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.030030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.030313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.030377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.030649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.030677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.030922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.030983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.031311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.031374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.031678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.031707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.031966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.031995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.032175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.032210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.032414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.032502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.032719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.032791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.033047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.033075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.033268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.033304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.033506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.033535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.033758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.033822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.034081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.034110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.034365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.034399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.034754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.034818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.035071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.035135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.035390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.035423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.035648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.035694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.035961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.036025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.036305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.036370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.036638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.036666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.036878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.036913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.037199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.402 [2024-07-26 14:25:27.037262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.402 qpair failed and we were unable to recover it.
00:31:10.402 [2024-07-26 14:25:27.037553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.037582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.037742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.037770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.037957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.037992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.038208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.038272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.038520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.038550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.038759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.038787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.038998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.039033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.039265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.039330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.039592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.039620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.039825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.039853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.040026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.040061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.040298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.040362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.040638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.040667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.040918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.040987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.041273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.041308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.041570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.041599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.041805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.041869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.042149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.042178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.042377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.042413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.042665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.042729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.043024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.043089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.043354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.043382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.043573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.043602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.043821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.043885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.044167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.044231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.044522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.044553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.044750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.044785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.045030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.045094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.045346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.045410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.045687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.045716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.046002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.046037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.403 [2024-07-26 14:25:27.046322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.403 [2024-07-26 14:25:27.046386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.403 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.046665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.046694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.046981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.047014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.047205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.047240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.047475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.047529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.047764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.047828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.048102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.048130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.048310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.048345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.048575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.048604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.048839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.048903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.049192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.049220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.049444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.049490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.049700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.049778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.050064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.050128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.050407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.050449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.050682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.404 [2024-07-26 14:25:27.050717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.404 qpair failed and we were unable to recover it.
00:31:10.404 [2024-07-26 14:25:27.051002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.051067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.051332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.051394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.051658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.051686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.051883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.051918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.052151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.052214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.052512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.052541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.052755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.052783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.053052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.053087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.053349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.053412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.053674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.053702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 
00:31:10.404 [2024-07-26 14:25:27.053993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.054021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.054185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.054220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.054479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.054533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.404 [2024-07-26 14:25:27.054729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.404 [2024-07-26 14:25:27.054801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.404 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.055081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.055110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.055282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.055317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.055546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.055575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.055736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.055801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.056076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.056104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.056304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.056339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 
00:31:10.405 [2024-07-26 14:25:27.056533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.056562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.056757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.056820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.057128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.057156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.057433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.057481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.057641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.057669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.057887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.057950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.058229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.058263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.058444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.058480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.058646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.058674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.058960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.059024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 
00:31:10.405 [2024-07-26 14:25:27.059310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.059338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.059522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.059551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.059745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.059810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.060064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.060128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.060412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.060448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.060683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.060717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.060989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.061053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.061339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.061403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.061673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.061701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.061928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.061963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 
00:31:10.405 [2024-07-26 14:25:27.062219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.062284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.062542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.062571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.062769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.063024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.063059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.063309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.063372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.063673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.063702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.063916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.405 [2024-07-26 14:25:27.063945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.405 qpair failed and we were unable to recover it. 00:31:10.405 [2024-07-26 14:25:27.064138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.064173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.064370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.064446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.064677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.064705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 
00:31:10.406 [2024-07-26 14:25:27.064986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.065014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.065236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.065271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.065527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.065556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.065782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.065848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.066101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.066130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.066310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.066344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.066535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.066563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.066762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.066825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.067075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.067103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.067315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.067350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 
00:31:10.406 [2024-07-26 14:25:27.067552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.067580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.067808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.067872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.068119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.068147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.068335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.068370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.068527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.068555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.068746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.068808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.069064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.069092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.069272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.069336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.069600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.069629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.069827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.069891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 
00:31:10.406 [2024-07-26 14:25:27.070179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.070207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.070381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.070459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.070721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.070785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.071086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.071150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.071451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.071479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.071700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.071734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.072036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.072101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.072381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.072456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.072694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.072722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 00:31:10.406 [2024-07-26 14:25:27.072947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.406 [2024-07-26 14:25:27.072981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.406 qpair failed and we were unable to recover it. 
00:31:10.406 [2024-07-26 14:25:27.073242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.073305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.073578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.073607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.073831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.073859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.074122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.074157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.074411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.074502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.074754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.074819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.075073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.075101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.075321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.075356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.075609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.075637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.075862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.075926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 
00:31:10.407 [2024-07-26 14:25:27.076202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.076230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.076421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.076464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.076702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.076767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.077034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.077108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.077370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.077398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.077608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.077637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.077837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.077901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.078182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.078245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.078532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.078561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.078774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.078809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 
00:31:10.407 [2024-07-26 14:25:27.079062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.079126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.079411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.079485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.079733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.079762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.080030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.080065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.080278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.080341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.080635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.080664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.080828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.080856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.081076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.081112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.081312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.081375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.081625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.081653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 
00:31:10.407 [2024-07-26 14:25:27.081866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.081894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.082138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.082172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.082370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.407 [2024-07-26 14:25:27.082479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.407 qpair failed and we were unable to recover it. 00:31:10.407 [2024-07-26 14:25:27.082697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.082725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.082937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.082965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.083206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.083241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.083473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.083529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.083754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.083818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.084101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.084129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.084349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.084384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-07-26 14:25:27.084607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.084636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.084845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.084909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.085192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.085220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.085446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.085493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.085724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.085788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.086070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.086134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.086414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.086458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.086631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.086659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.086910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.086973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.087264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.087327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-07-26 14:25:27.087591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.087620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.087836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.087871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.088157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.088221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.088509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.088543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.088734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.088763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.088972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.089007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.089202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.089266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.089562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.089591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.089721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.089749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.089929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.089964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 
00:31:10.408 [2024-07-26 14:25:27.090159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.090223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.090478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.090524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.090710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.090738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.090921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.090956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.091176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.091240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.091522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.408 [2024-07-26 14:25:27.091551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.408 qpair failed and we were unable to recover it. 00:31:10.408 [2024-07-26 14:25:27.091756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.409 [2024-07-26 14:25:27.091785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.409 qpair failed and we were unable to recover it. 00:31:10.409 [2024-07-26 14:25:27.092069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.409 [2024-07-26 14:25:27.092103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.409 qpair failed and we were unable to recover it. 00:31:10.409 [2024-07-26 14:25:27.092389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.409 [2024-07-26 14:25:27.092467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.409 qpair failed and we were unable to recover it. 00:31:10.409 [2024-07-26 14:25:27.092694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.409 [2024-07-26 14:25:27.092752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.409 qpair failed and we were unable to recover it. 
00:31:10.409 [2024-07-26 14:25:27.093037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.093065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.093273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.093308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.093552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.093619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.093908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.093972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.094261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.094289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.094523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.094559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.094770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.094834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.095122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.095187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.095434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.095463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.095656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.095691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.095893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.095958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.096214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.096278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.096531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.096560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.096748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.096783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.097012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.097076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.097363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.097445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.097659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.097687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.097880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.097914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.098107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.098171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.098476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.098530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.098711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.098740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.098950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.098985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.099198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.099262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.099534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.409 [2024-07-26 14:25:27.099568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.409 qpair failed and we were unable to recover it.
00:31:10.409 [2024-07-26 14:25:27.099785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.099814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.100070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.100105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.100334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.100398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.100651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.100679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.100894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.100922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.101093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.101128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.101362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.101426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.101693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.101738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.101992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.102020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.102245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.102280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.102523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.102552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.102770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.102833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.103095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.103123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.103306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.103342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.103557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.103586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.103777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.103841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.104121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.104150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.104341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.104376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.104547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.104576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.104796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.104860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.105142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.105170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.105404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.105445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.105657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.105685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.105935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.105999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.106254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.106318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.106616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.106645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.106847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.106912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.107196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.107260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.107496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.107536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.107754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.107790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.108088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.108151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.108443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.108523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.108739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.410 [2024-07-26 14:25:27.108768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.410 qpair failed and we were unable to recover it.
00:31:10.410 [2024-07-26 14:25:27.109025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.109060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.109272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.109336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.109608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.109637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.109820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.109848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.110069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.110104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.110333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.110397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.110670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.110704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.110982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.111011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.111208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.111242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.111425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.111515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.111750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.111815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.112096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.112124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.112335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.112370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.112571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.112600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.112828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.112892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.113176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.113204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.113450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.113496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.113719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.113784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.114035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.114099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.114358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.114387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.114622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.114652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.114863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.114928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.115210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.115274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.115527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.115557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.115781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.115817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.116048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.116111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.116365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.116442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.116710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.116739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.116978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.117013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.117215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.117279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.117564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.117593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.117773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.117802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.118012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.118047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.411 qpair failed and we were unable to recover it.
00:31:10.411 [2024-07-26 14:25:27.118304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.411 [2024-07-26 14:25:27.118368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.118675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.118704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.118954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.118982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.119202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.119237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.119507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.119537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.119774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.119839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.120116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.120144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.120368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.120403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.120649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.120677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.120923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.120987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.121270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.121298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.121501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.121530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.121750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.121814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.122063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.122139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.122425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.122498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.122689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.122735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.122952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.123016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.123273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.123336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.123591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.123620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.123804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.123839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.124049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.124112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.124366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.124443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.124673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.124702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.124924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.124958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.125170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.125234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.125509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.125538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.125719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.125748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.125940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.125975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.126214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.126277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.126514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.126543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.126759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.126788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.127058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.127092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.127318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.412 [2024-07-26 14:25:27.127382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.412 qpair failed and we were unable to recover it.
00:31:10.412 [2024-07-26 14:25:27.127687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.127748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.128032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.128061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.128306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.128341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.128594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.128623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.128860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.128924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.129203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.129231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.129445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.129494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.129726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.129790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.130089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.130154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.130452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.130481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.130699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.130733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.130967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.131030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.131295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.131359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.131648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.131677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.131900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.131935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.132198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.132261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.132532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.132561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.132787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.132815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.133080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.133114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.133366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.133447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.133682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.133715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.134016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.134045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.134241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.134275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.134528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.134556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.134708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.134775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.135035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.135064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.135227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.135262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.135489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.135517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.135734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.135798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.136061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.136090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.136290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.136324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.136527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.136557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.136750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.413 [2024-07-26 14:25:27.136793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.413 qpair failed and we were unable to recover it.
00:31:10.413 [2024-07-26 14:25:27.137083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.137112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.137332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.137367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.137551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.137580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.137776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.137841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.138121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.138149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.138396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.138449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.138698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.138762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.139050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.139114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.139388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.139479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.139669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.139697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.139894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.139957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.140244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.140307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.140586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.140614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.140809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.140843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.141091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.141155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.141444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.141473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.141681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.141709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.141995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.142030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.142281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.142344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.142632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.142660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.142868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.142896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.143141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.143175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.143356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.143420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.143699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.143727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.144013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.144041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.144282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.144316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.144543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.144572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.144753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.414 [2024-07-26 14:25:27.144786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.414 qpair failed and we were unable to recover it.
00:31:10.414 [2024-07-26 14:25:27.144973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.145001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.145216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.145251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.145488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.145517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.145731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.145759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.146032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.146060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.146256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.146291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.146516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.146545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.146750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.146778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.146991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.147019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.147201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.147234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.147475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.147505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.147719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.147747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.148014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.148042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.148292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.148326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.148534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.148562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.148781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.148810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.149092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.149120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.149330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.149364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.149612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.149641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.149823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.149851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.150056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.150084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.150366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.150400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.150693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.150722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.150957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.150985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.151207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.151235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.151534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.415 [2024-07-26 14:25:27.151564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.415 qpair failed and we were unable to recover it.
00:31:10.415 [2024-07-26 14:25:27.151807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.415 [2024-07-26 14:25:27.151871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.415 qpair failed and we were unable to recover it. 00:31:10.415 [2024-07-26 14:25:27.152115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.415 [2024-07-26 14:25:27.152143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.415 qpair failed and we were unable to recover it. 00:31:10.415 [2024-07-26 14:25:27.152310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.415 [2024-07-26 14:25:27.152338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.415 qpair failed and we were unable to recover it. 00:31:10.415 [2024-07-26 14:25:27.152546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.415 [2024-07-26 14:25:27.152575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.415 qpair failed and we were unable to recover it. 00:31:10.415 [2024-07-26 14:25:27.152745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.415 [2024-07-26 14:25:27.152806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.415 qpair failed and we were unable to recover it. 00:31:10.415 [2024-07-26 14:25:27.153087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.415 [2024-07-26 14:25:27.153115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.415 qpair failed and we were unable to recover it. 00:31:10.415 [2024-07-26 14:25:27.153349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.415 [2024-07-26 14:25:27.153412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.415 qpair failed and we were unable to recover it. 00:31:10.415 [2024-07-26 14:25:27.153681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.153710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.153932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.153996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.154249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.154277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 
00:31:10.416 [2024-07-26 14:25:27.154467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.154496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.154695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.154730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.154970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.155034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.155324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.155357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.155617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.155646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.155865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.155900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.156188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.156252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.156529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.156558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.156775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.156803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.157102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.157137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 
00:31:10.416 [2024-07-26 14:25:27.157412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.157504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.157720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.157748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.158049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.158077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.158362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.158397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.158714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.158792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.159093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.159121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.159280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.159308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.159535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.159564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.159750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.159814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.160098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.160127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 
00:31:10.416 [2024-07-26 14:25:27.160347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.160410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.160667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.160695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.160901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.160966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.161187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.161215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.161423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.161458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.161674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.161724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.162004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.162067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.162348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.162376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.162595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.162624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 00:31:10.416 [2024-07-26 14:25:27.162806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.162841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.416 qpair failed and we were unable to recover it. 
00:31:10.416 [2024-07-26 14:25:27.163090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.416 [2024-07-26 14:25:27.163155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.163441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.163470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.163688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.163716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.164010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.164045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.164293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.164356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.164657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.164685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.164870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.164898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.165059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.165094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.165333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.165396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.165661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.165690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 
00:31:10.417 [2024-07-26 14:25:27.165907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.165935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.166217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.166252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.166513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.166753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.166786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.167063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.167091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.167295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.167330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.167549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.167577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.167808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.167836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.168127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.168155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.168324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.168358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 
00:31:10.417 [2024-07-26 14:25:27.168512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.168541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.168723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.168750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.168956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.168983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.169272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.169305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.169534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.169563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.169736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.169764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.169972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.169999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.170271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.170305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.170506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.170534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.170746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.170774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 
00:31:10.417 [2024-07-26 14:25:27.171053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.171080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.171240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.171274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.171518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.417 [2024-07-26 14:25:27.171547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.417 qpair failed and we were unable to recover it. 00:31:10.417 [2024-07-26 14:25:27.171728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.171757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.171931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.171959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.172166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.172201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.172409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.172487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.172722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.172750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.173028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.173056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.173221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.173285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 
00:31:10.418 [2024-07-26 14:25:27.173572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.173601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.173790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.173818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.174041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.174070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.174367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.174446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.174646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.174674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.174890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.174918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.175160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.175188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.175373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.175452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.175693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.175721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.176025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.176053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 
00:31:10.418 [2024-07-26 14:25:27.176249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.176277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.176519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.176548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.176726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.176789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.177085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.177119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.177267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.177296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.177452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.177500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.177678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.177754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.178052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.178080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.178292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.178320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.178615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.178644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 
00:31:10.418 [2024-07-26 14:25:27.178854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.178918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.179244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.179272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.179570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.179599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.179770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.179805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.180059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.180123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.180379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.418 [2024-07-26 14:25:27.180407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.418 qpair failed and we were unable to recover it. 00:31:10.418 [2024-07-26 14:25:27.180714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.180778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.181046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.181081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.181289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.181354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.181690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.181738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 
00:31:10.419 [2024-07-26 14:25:27.182008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.182037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.182239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.182274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.182519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.182584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.182865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.182893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.183076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.183104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.183300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.183335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.183557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.183586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.183762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.183790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.183969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.183997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.184210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.184243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 
00:31:10.419 [2024-07-26 14:25:27.184518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.184547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.184738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.184765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.184960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.184987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.185173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.185207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.185510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.185542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.185765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.185794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.186086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.186114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.186689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.186738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.187039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.187104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 00:31:10.419 [2024-07-26 14:25:27.187406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.419 [2024-07-26 14:25:27.187443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.419 qpair failed and we were unable to recover it. 
00:31:10.419 [2024-07-26 14:25:27.187646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.419 [2024-07-26 14:25:27.187675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.419 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for roughly 200 further connect attempts against tqpair=0x7f226c000b90 (addr=10.0.0.2, port=4420), with only the timestamps advancing from 14:25:27.187 to 14:25:27.243 ...]
00:31:10.426 [2024-07-26 14:25:27.243076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.426 [2024-07-26 14:25:27.243105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.426 qpair failed and we were unable to recover it.
00:31:10.426 [2024-07-26 14:25:27.243287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.243316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.243486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.243515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.243734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.243797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.244058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.244086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.244281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.244309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.244509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.244538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.244732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.244795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.245101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.245130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.245323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.245351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.245594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.245623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 
00:31:10.426 [2024-07-26 14:25:27.245814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.245879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.246161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.246190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.246333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.246361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.246567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.246596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.246811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.246874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.247162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.247190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.247375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.247404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.247588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.247617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.247826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.247890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 00:31:10.426 [2024-07-26 14:25:27.248178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.426 [2024-07-26 14:25:27.248206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.426 qpair failed and we were unable to recover it. 
00:31:10.426 [2024-07-26 14:25:27.248475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.248508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.248726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.248754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.248965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.249026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.249329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.249358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.249753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.249821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.250120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.250155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.250482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.250544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.250741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.250769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.250985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.251014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.251305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.251340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 
00:31:10.427 [2024-07-26 14:25:27.251622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.251653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.251847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.251876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.252022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.252050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.252271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.252307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.252576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.252605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.252809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.252838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.253038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.253067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.253263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.253297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.253498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.253527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.253744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.253773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 
00:31:10.427 [2024-07-26 14:25:27.254051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.254079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.254304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.254339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.254572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.254601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.254809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.254837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.255067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.255096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.255312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.255346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.255598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.255628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.255815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.255844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.256055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.256083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.256380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.256415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 
00:31:10.427 [2024-07-26 14:25:27.256724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.256789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.257065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.427 [2024-07-26 14:25:27.257094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.427 qpair failed and we were unable to recover it. 00:31:10.427 [2024-07-26 14:25:27.257270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.257298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.257512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.257542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.257757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.257821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.258083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.258112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.258321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.258350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.258665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.258694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.258965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.259029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.259322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.259351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 
00:31:10.428 [2024-07-26 14:25:27.259567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.259601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.259794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.259830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.260085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.260149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.260436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.260464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.260615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.260644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.260858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.260894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.261101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.261164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.261483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.261512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.261678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.261706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.261898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.261933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 
00:31:10.428 [2024-07-26 14:25:27.262182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.262246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.262524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.262552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.262739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.262767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.262947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.262982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.263213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.263276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.263532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.263562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.263749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.263778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.263942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.263977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.264214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.264278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.264528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.264557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 
00:31:10.428 [2024-07-26 14:25:27.264765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.264794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.264995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.265028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.265186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.265218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.265408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.265443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.265653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.428 [2024-07-26 14:25:27.265681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.428 qpair failed and we were unable to recover it. 00:31:10.428 [2024-07-26 14:25:27.265961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.265994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.266233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.266295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.266561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.266590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.266774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.266803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.267028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.267061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 
00:31:10.429 [2024-07-26 14:25:27.267264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.267297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.267510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.267538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.267681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.267709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.267873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.267901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.268163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.268196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.268450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.268514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.268714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.268743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.268925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.268954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.269161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.269190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.269416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.269451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 
00:31:10.429 [2024-07-26 14:25:27.269627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.269661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.429 [2024-07-26 14:25:27.269873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.429 [2024-07-26 14:25:27.269935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.429 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.270186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.270214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.270441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.270493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.270676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.270741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.271036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.271100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.271350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.271379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.271598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.271627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.271846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.271879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.272070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.272103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 
00:31:10.705 [2024-07-26 14:25:27.272319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.272347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.272536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.272565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.272743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.272775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.273020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.273084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.273371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.705 [2024-07-26 14:25:27.273400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.705 qpair failed and we were unable to recover it. 00:31:10.705 [2024-07-26 14:25:27.273624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.273653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.273883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.273916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.274107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.274141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.274361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.274389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.274571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.274600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 
00:31:10.706 [2024-07-26 14:25:27.274757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.274819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.275009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.275042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.275236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.275264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.275457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.275504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.275683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.275753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.276008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.276072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.276350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.276378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.276642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.276671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.276850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.276914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.277186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.277249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 
00:31:10.706 [2024-07-26 14:25:27.277476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.277505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.277718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.277753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.277997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.278060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.278342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.278406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.278700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.278728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.279022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.279057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.279296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.279359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.279668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.279696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.279866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.279894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 00:31:10.706 [2024-07-26 14:25:27.280113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.706 [2024-07-26 14:25:27.280147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.706 qpair failed and we were unable to recover it. 
[... the identical three-line failure sequence (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f226c000b90 at addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats for every retry from [2024-07-26 14:25:27.280380] through [2024-07-26 14:25:27.335753] ...]
00:31:10.713 [2024-07-26 14:25:27.335943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.713 [2024-07-26 14:25:27.335977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.713 qpair failed and we were unable to recover it.
00:31:10.713 [2024-07-26 14:25:27.336180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.336259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.336520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.336548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.336746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.336774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.336970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.337005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.337231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.337295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.337578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.337607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.337787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.337815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.337997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.338031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.338243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.338306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.338547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.338575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 
00:31:10.713 [2024-07-26 14:25:27.338791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.338819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.339074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.339108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.339353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.339417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.339703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.339780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.340107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.340136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.340440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.340489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.340670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.340726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.341016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.341044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.341247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.341276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.341551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.341580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 
00:31:10.713 [2024-07-26 14:25:27.341801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.341864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.342122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.342150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.342333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.713 [2024-07-26 14:25:27.342361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.713 qpair failed and we were unable to recover it. 00:31:10.713 [2024-07-26 14:25:27.342619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.342647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.342839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.342903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.343188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.343216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.343409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.343450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.343624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.343655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.343876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.343939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.344194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.344222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 
00:31:10.714 [2024-07-26 14:25:27.344477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.344506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.344703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.344750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.345032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.345095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.345373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.345401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.345598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.345626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.345800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.345835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.346047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.346110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.346378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.346406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.346620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.346666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.346877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.346907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 
00:31:10.714 [2024-07-26 14:25:27.347111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.347159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.347393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.347452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.347674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.347702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.347916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.347943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.348133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.348180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.348363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.348415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.348629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.348657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.348883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.348928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.349112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.349162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.349379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.349407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 
00:31:10.714 [2024-07-26 14:25:27.349643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.349726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.350034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.350071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.350305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.350370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.350630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.350660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.350909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.350938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.351133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.714 [2024-07-26 14:25:27.351168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.714 qpair failed and we were unable to recover it. 00:31:10.714 [2024-07-26 14:25:27.351406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.351496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.351731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.351784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.352086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.352150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.352447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.352494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 
00:31:10.715 [2024-07-26 14:25:27.352661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.352735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.352997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.353061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.353353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.353417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.353678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.353706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.353963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.354027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.354308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.354371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.354662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.354691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.354931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.354972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.355263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.355328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.355621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.355650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 
00:31:10.715 [2024-07-26 14:25:27.355837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.355865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.356086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.356121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.356347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.356410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.356697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.356732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.357018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.357046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.357235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.357270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.357499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.357529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.357710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.357766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.357981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.358009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.358211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.358246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 
00:31:10.715 [2024-07-26 14:25:27.358501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.358547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.358783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.358847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.359146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.359210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.359514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.359543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.359771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.359834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.715 [2024-07-26 14:25:27.360107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.715 [2024-07-26 14:25:27.360171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.715 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.360450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.360516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.360728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.360763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.361040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.361104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.361382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.361463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 
00:31:10.716 [2024-07-26 14:25:27.361672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.361700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.361885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.361920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.362117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.362180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.362474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.362526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.362750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.362779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.363041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.363076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.363319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.363383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.363658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.363687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.363932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.363960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.364172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.364207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 
00:31:10.716 [2024-07-26 14:25:27.364445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.364517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.364708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.364785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.365072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.365101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.365308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.365343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.365568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.365597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.365824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.365889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.366171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.366199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.366455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.366510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.366743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.366806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.367096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.367161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 
00:31:10.716 [2024-07-26 14:25:27.367421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.367456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.367638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.716 [2024-07-26 14:25:27.367682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.716 qpair failed and we were unable to recover it. 00:31:10.716 [2024-07-26 14:25:27.367919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.367983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.368273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.368337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.368599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.368627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.368848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.368883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.369193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.369256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.369505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.369534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.369692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.369720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.369941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.369975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 
00:31:10.717 [2024-07-26 14:25:27.370198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.370262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.370523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.370553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.370758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.370786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.370956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.370991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.371225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.371289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.371562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.371591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.371747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.371775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.371988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.372022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.372264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.372328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 00:31:10.717 [2024-07-26 14:25:27.372628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.717 [2024-07-26 14:25:27.372657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:10.717 qpair failed and we were unable to recover it. 
00:31:10.717 [2024-07-26 14:25:27.372867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.717 [2024-07-26 14:25:27.372895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:10.717 qpair failed and we were unable to recover it.
00:31:10.718 [... the same connect() failed (errno = 111) / sock connection error record pair for tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 repeats through 2024-07-26 14:25:27.394619, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:31:10.720 [2024-07-26 14:25:27.394799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.720 [2024-07-26 14:25:27.394843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:10.720 qpair failed and we were unable to recover it.
00:31:10.724 [... the same record pair for tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 repeats through 2024-07-26 14:25:27.425164, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:31:10.724 [2024-07-26 14:25:27.425346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.425373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.425580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.425629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.425860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.425911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.426125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.426171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.426351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.426379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.426584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.426631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.426822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.426867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.427082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.427128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.427345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.427372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.427564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.427611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 
00:31:10.724 [2024-07-26 14:25:27.427852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.427902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.428112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.428158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.428336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.428365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.428569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.428598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.428823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.428872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.429101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.429146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.429361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.429389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.429616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.429645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.429813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.429866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.430084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.430130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 
00:31:10.724 [2024-07-26 14:25:27.430300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.430329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.430528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.430584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.430785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.430833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.430992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.431044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.431209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.431237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.431372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.431400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.431599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.431645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.431843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.431889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.432044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.432096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.724 qpair failed and we were unable to recover it. 00:31:10.724 [2024-07-26 14:25:27.432289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.724 [2024-07-26 14:25:27.432317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 
00:31:10.725 [2024-07-26 14:25:27.432504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.432558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.432785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.432834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.433064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.433111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.433292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.433319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.433547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.433584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.433771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.433817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.434039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.434089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.434297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.434325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.434524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.434573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.434793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.434838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 
00:31:10.725 [2024-07-26 14:25:27.435010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.435059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.435248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.435275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.435482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.435533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.435702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.435748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.435961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.436028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.436254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.436281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.436476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.436523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.436754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.436814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.437028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.437079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.437306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.437334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 
00:31:10.725 [2024-07-26 14:25:27.437554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.437612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.437832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.437879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.438074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.438126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.438306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.438334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.438525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.438573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.438765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.438811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.439020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.439072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.439260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.439288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.439494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.439542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.439760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.439810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 
00:31:10.725 [2024-07-26 14:25:27.440034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.440085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.440296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.725 [2024-07-26 14:25:27.440324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.725 qpair failed and we were unable to recover it. 00:31:10.725 [2024-07-26 14:25:27.440516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.440563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.440780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.440827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.441052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.441102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.441295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.441324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.441519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.441570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.441753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.441787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.441991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.442043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.442238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.442265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 
00:31:10.726 [2024-07-26 14:25:27.442528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.442556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.442742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.442787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.443022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.443075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.443233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.443283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.443534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.443583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.443766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.443812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.444007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.444056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.444231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.444259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.444488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.444522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.444759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.444810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 
00:31:10.726 [2024-07-26 14:25:27.445034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.445090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.445276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.445304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.445494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.445545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.445849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.445909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.446122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.446176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.446328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.446356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.446605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.446655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.446898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.446946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.447145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.447197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.447347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.447374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 
00:31:10.726 [2024-07-26 14:25:27.447605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.447652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.447876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.447928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.448112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.448157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.448347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.448374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.448564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.448592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.448768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.726 [2024-07-26 14:25:27.448814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.726 qpair failed and we were unable to recover it. 00:31:10.726 [2024-07-26 14:25:27.449007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.449056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.449272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.449300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.449517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.449567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.449785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.449831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 
00:31:10.727 [2024-07-26 14:25:27.450014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.450067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.450236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.450264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.450420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.450466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.450653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.450699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.450916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.450968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.451191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.451246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.451404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.451439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.451676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.451724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.451942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.451993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.452218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.452269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 
00:31:10.727 [2024-07-26 14:25:27.452456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.452486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.452692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.452719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.452888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.452937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.453108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.453158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.453337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.453365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.453573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.453603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.453877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.453927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.454131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.454176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.454409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.454447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.454657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.454685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 
00:31:10.727 [2024-07-26 14:25:27.454876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.454940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.455138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.455192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.455378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.455406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.455556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.455584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.727 [2024-07-26 14:25:27.455777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.727 [2024-07-26 14:25:27.455834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.727 qpair failed and we were unable to recover it. 00:31:10.728 [2024-07-26 14:25:27.456034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-07-26 14:25:27.456086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.728 qpair failed and we were unable to recover it. 00:31:10.728 [2024-07-26 14:25:27.456298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-07-26 14:25:27.456326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.728 qpair failed and we were unable to recover it. 00:31:10.728 [2024-07-26 14:25:27.456514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-07-26 14:25:27.456542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.728 qpair failed and we were unable to recover it. 00:31:10.728 [2024-07-26 14:25:27.456769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-07-26 14:25:27.456821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.728 qpair failed and we were unable to recover it. 00:31:10.728 [2024-07-26 14:25:27.457013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.728 [2024-07-26 14:25:27.457064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.728 qpair failed and we were unable to recover it. 
00:31:10.728 [2024-07-26 14:25:27.457256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.728 [2024-07-26 14:25:27.457284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:10.728 qpair failed and we were unable to recover it.
00:31:10.728 [... same three messages repeated without variation (connect() failed, errno = 111; sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) from 2024-07-26 14:25:27.457472 through 14:25:27.508264 ...]
00:31:10.734 [2024-07-26 14:25:27.508477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.734 [2024-07-26 14:25:27.508506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:10.734 qpair failed and we were unable to recover it.
00:31:10.734 [2024-07-26 14:25:27.508736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.509002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.509051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.509286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.509338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.509540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.509589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.509733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.509784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.509975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.510027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.510242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.510293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.510485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.510536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.510739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.510788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.510988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.511039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 
00:31:10.734 [2024-07-26 14:25:27.511260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.511288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.511490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.511538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.511736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.734 [2024-07-26 14:25:27.511788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.734 qpair failed and we were unable to recover it. 00:31:10.734 [2024-07-26 14:25:27.511980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.512031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.512214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.512242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.512453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.512482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.512662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.512707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.512926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.512976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.513138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.513190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.513383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.513411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 
00:31:10.735 [2024-07-26 14:25:27.513600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.513646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.513840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.513892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.514121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.514174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.514350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.514378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.514566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.514613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.514827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.514877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.515042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.515092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.515275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.515303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.515451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.515479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.515692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.515742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 
00:31:10.735 [2024-07-26 14:25:27.515924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.515972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.516153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.516199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.516373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.516401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.516598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.516644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.516830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.516880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.517038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.517084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.517264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.517292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.517481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.517518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.517746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.517803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.517986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.518031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 
00:31:10.735 [2024-07-26 14:25:27.518238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.518266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.518450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.518479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.518703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.518747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.518908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.518953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.519152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.519202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.519394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.519421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.519599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.735 [2024-07-26 14:25:27.519645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.735 qpair failed and we were unable to recover it. 00:31:10.735 [2024-07-26 14:25:27.519835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.519881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.520080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.520129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.520312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.520339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 
00:31:10.736 [2024-07-26 14:25:27.520526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.520555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.520718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.520764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.520988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.521039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.521265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.521315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.521552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.521602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.521830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.521882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.522132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.522180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.522367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.522395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.522581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.522628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.522817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.522863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 
00:31:10.736 [2024-07-26 14:25:27.523073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.523125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.523319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.523347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.523617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.523664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.523831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.523877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.524092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.524144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.524363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.524391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.524616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.524644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.524859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.524904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.525128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.525179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.525358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.525385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 
00:31:10.736 [2024-07-26 14:25:27.525585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.525615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.525824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.525871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.526077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.526129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.526338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.526366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.526554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.526583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.526755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.526800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.527012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.527061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.527249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.527277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.527460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.527508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.736 [2024-07-26 14:25:27.527724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.527770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 
00:31:10.736 [2024-07-26 14:25:27.527967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.736 [2024-07-26 14:25:27.528015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.736 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.528229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.528280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.528484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.528535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.528742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.528788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.528997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.529047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.529254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.529282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.529532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.529578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.529756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.529802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.529991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.530041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.530250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.530278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 
00:31:10.737 [2024-07-26 14:25:27.530491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.530535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.530744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.530789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.530982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.531033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.531204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.531232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.531443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.531475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.531675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.531721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.531923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.531975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.532207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.532253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.532467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.532496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.532660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.532704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 
00:31:10.737 [2024-07-26 14:25:27.532926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.532978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.533197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.533248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.533480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.533509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.533671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.533717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.533921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.533971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.534151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.534201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.534394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.534421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.534584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.534612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.737 [2024-07-26 14:25:27.534819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.737 [2024-07-26 14:25:27.534871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.737 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.535094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.535141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 
00:31:10.738 [2024-07-26 14:25:27.535323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.535351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.535536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.535564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.535730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.535788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.536013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.536063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.536260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.536308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.536528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.536575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.536763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.536812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.537021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.537090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.537274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.537302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.537501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.537548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 
00:31:10.738 [2024-07-26 14:25:27.537725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.537783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.537941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.537968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.538193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.538223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.538414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.538447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.538652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.538699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.538984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.539039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.539267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.539317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.539548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.539576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.539770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.539826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 00:31:10.738 [2024-07-26 14:25:27.540007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-07-26 14:25:27.540057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:10.738 qpair failed and we were unable to recover it. 
00:31:10.738 [2024-07-26 14:25:27.540263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.738 [2024-07-26 14:25:27.540291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:10.738 qpair failed and we were unable to recover it.
00:31:10.738 [2024-07-26 14:25:27.540512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.738 [2024-07-26 14:25:27.540558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:10.738 qpair failed and we were unable to recover it.
[... the same three-line failure repeats, with only the timestamps advancing, for every remaining connection attempt in this excerpt (2024-07-26 14:25:27.540845 through 14:25:27.591308, elapsed markers 00:31:10.738 through 00:31:11.023); each attempt targets tqpair=0x1c78ea0 at addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:31:11.023 [2024-07-26 14:25:27.591483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.023 [2024-07-26 14:25:27.591518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:11.023 qpair failed and we were unable to recover it.
00:31:11.023 [2024-07-26 14:25:27.591745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.023 [2024-07-26 14:25:27.591795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.023 qpair failed and we were unable to recover it. 00:31:11.023 [2024-07-26 14:25:27.592015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.023 [2024-07-26 14:25:27.592064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.023 qpair failed and we were unable to recover it. 00:31:11.023 [2024-07-26 14:25:27.592245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.023 [2024-07-26 14:25:27.592273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.023 qpair failed and we were unable to recover it. 00:31:11.023 [2024-07-26 14:25:27.592458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.023 [2024-07-26 14:25:27.592486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.023 qpair failed and we were unable to recover it. 00:31:11.023 [2024-07-26 14:25:27.592690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.023 [2024-07-26 14:25:27.592756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.023 qpair failed and we were unable to recover it. 00:31:11.023 [2024-07-26 14:25:27.592973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.593022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.593243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.593291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.593498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.593545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.593744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.593796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.594029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.594080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 
00:31:11.024 [2024-07-26 14:25:27.594284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.594312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.594537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.594584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.594774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.594822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.595066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.595116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.595327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.595354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.595550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.595598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.595793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.595843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.596068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.596118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.596273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.596301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.596485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.596520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 
00:31:11.024 [2024-07-26 14:25:27.596720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.596776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.597002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.597054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.597249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.597277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.597478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.597525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.597748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.597794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.598021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.598069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.598275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.598303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.598488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.598538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.598774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.598825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.599063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.599114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 
00:31:11.024 [2024-07-26 14:25:27.599301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.599329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.599482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.599516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.599765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.599818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.599993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.600042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.600226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.600254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.600476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.600505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.600731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.600782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.600982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.601033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.024 [2024-07-26 14:25:27.601250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.024 [2024-07-26 14:25:27.601298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.024 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.601510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.601557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 
00:31:11.025 [2024-07-26 14:25:27.601770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.601818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.602046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.602096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.602311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.602339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.602515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.602564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.602796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.602846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.603046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.603093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.603301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.603329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.603508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.603555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.603727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.603784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.604004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.604053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 
00:31:11.025 [2024-07-26 14:25:27.604234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.604262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.604441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.604470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.604646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.604674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.604880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.604929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.605146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.605196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.605406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.605441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.605660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.605687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.605911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.605962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.606182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.606232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.606412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.606453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 
00:31:11.025 [2024-07-26 14:25:27.606673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.606701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.606926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.606975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.607208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.607255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.607470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.607499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.607707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.607734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.607925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.607974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.608204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.608253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.608467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.608495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.608682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.608710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.608928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.608979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 
00:31:11.025 [2024-07-26 14:25:27.609194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.609247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.609465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.609494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.025 qpair failed and we were unable to recover it. 00:31:11.025 [2024-07-26 14:25:27.609720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.025 [2024-07-26 14:25:27.609748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.609970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.610018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.610215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.610267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.610453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.610485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.610665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.610693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.610892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.610944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.611173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.611225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.611459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.611487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 
00:31:11.026 [2024-07-26 14:25:27.611666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.611694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.611924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.611972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.612135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.612185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.612372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.612400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.612595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.612623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.612831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.612882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.613078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.613128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.613334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.613362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.613502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.613531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.613757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.613812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 
00:31:11.026 [2024-07-26 14:25:27.614036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.614085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.614301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.614329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.614538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.614585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.614782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.614838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.615060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.615110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.615291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.615319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.615487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.615522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.615707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.615751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.615973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.616023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.616211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.616257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 
00:31:11.026 [2024-07-26 14:25:27.616398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.616426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.616643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.026 [2024-07-26 14:25:27.616697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.026 qpair failed and we were unable to recover it. 00:31:11.026 [2024-07-26 14:25:27.616922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.616971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.617160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.617206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.617385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.617412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.617605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.617651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.617880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.617929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.618086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.618131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.618350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.618378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.618568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.618615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 
00:31:11.027 [2024-07-26 14:25:27.618845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.618896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.619079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.619123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.619338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.619366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.619567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.619596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.619789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.619840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.620024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.620070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.620246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.620279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.620489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.620524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.620728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.620771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.620952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.620997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 
00:31:11.027 [2024-07-26 14:25:27.621189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.621240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.621450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.621478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.621708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.621764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.621951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.621996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.622182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.622233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.622447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.622479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.622705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.622757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.622980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.623025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.623250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.623300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 00:31:11.027 [2024-07-26 14:25:27.623544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.027 [2024-07-26 14:25:27.623595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.027 qpair failed and we were unable to recover it. 
00:31:11.027 [2024-07-26 14:25:27.623834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.027 [2024-07-26 14:25:27.623887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:11.027 qpair failed and we were unable to recover it.
[... the same three-message sequence — connect() failed with errno = 111 (connection refused), sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 14:25:27.624076 through 14:25:27.674688; all attempts fail identically ...]
00:31:11.034 [2024-07-26 14:25:27.674887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.674943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.675131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.675179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.675336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.675363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.675579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.675627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.675837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.675891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.676077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.676127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.676339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.676367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.676528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.676575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.676771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.676821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.677052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.677099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 
00:31:11.034 [2024-07-26 14:25:27.677280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.677308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.677452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.677482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.677691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.677755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.677943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.677997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.678200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.678248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.678405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.678443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.678610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.678656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.678889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.678942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.679151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.679201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.034 [2024-07-26 14:25:27.679403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.679446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 
00:31:11.034 [2024-07-26 14:25:27.679595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.034 [2024-07-26 14:25:27.679642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.034 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.679829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.679879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.680093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.680144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.680291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.680318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.680494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.680529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.680752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.680804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.681004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.681052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.681257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.681284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.681474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.681519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.681675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.681721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 
00:31:11.035 [2024-07-26 14:25:27.681880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.681930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.682115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.682161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.682365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.682393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.682577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.682625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.682834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.682882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.683068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.683113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.683272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.683308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.683529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.683576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.683783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.683833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.684030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.684076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 
00:31:11.035 [2024-07-26 14:25:27.684283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.684312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.684496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.684530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.684744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.684794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.684932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.684979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.685183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.685230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.685389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.685419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.685632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.685680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.685905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.685955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.686194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.686247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.686444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.686476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 
00:31:11.035 [2024-07-26 14:25:27.686632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.686660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.686868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.686915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.687115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.687167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.687378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.687405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.035 [2024-07-26 14:25:27.687574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.035 [2024-07-26 14:25:27.687602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.035 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.687770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.687804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.688034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.688093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.688265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.688293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.688478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.688516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.688742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.688798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 
00:31:11.036 [2024-07-26 14:25:27.689038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.689090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.689269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.689301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.689500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.689559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.689749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.689795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.690017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.690066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.690251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.690278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.690483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.690533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.690748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.690801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.691028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.691082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.691290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.691318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 
00:31:11.036 [2024-07-26 14:25:27.691505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.691554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.691772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.691818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.692038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.692088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.692303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.692331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.692510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.692560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.692749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.692796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.693022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.693073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.693281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.693308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.693513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.693561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.693748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.693794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 
00:31:11.036 [2024-07-26 14:25:27.693982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.694032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.694209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.694237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.694404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.694440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.694588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.694635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.694837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.694887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.695066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.695111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.695309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.695336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.695510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.036 [2024-07-26 14:25:27.695557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.036 qpair failed and we were unable to recover it. 00:31:11.036 [2024-07-26 14:25:27.695761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.695814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.696037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.696088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 
00:31:11.037 [2024-07-26 14:25:27.696261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.696289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.696494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.696542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.696730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.696785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.696966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.697017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.697198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.697225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.697402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.697436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.697609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.697655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.697824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.697876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.698062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.698121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.698298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.698326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 
00:31:11.037 [2024-07-26 14:25:27.698561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.698615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.698817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.698868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.699065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.699115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.699313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.699341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.699518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.699565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.699742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.699797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.700022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.700071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.700274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.700302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.700447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.700475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.700664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.700710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 
00:31:11.037 [2024-07-26 14:25:27.700927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.700978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.701158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.701205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.701392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.701420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.701629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.701683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.701918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.701975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.702210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.037 [2024-07-26 14:25:27.702259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.037 qpair failed and we were unable to recover it. 00:31:11.037 [2024-07-26 14:25:27.702437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.702470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.702614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.702663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.702813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.702864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.703032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.703065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 
00:31:11.038 [2024-07-26 14:25:27.703282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.703332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.703519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.703567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.703786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.703840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.704049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.704095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.704309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.704337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.704492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.704541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.704744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.704807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.704991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.705036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.705237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.705287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 00:31:11.038 [2024-07-26 14:25:27.705502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.038 [2024-07-26 14:25:27.705548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.038 qpair failed and we were unable to recover it. 
00:31:11.038 [2024-07-26 14:25:27.705726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.038 [2024-07-26 14:25:27.705772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:11.038 qpair failed and we were unable to recover it.
[... the connect() failed / sock connection error / qpair failed sequence above repeats for tqpair=0x1c78ea0 on every retry from 14:25:27.705726 through 14:25:27.735534; only the timestamps differ, duplicate entries elided ...]
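For triage: errno = 111 on Linux is ECONNREFUSED, so each posix_sock_create failure above means the TCP handshake to 10.0.0.2:4420 was actively refused -- no NVMe/TCP listener was (or was still) accepting on that port, and the host side kept retrying with fresh qpairs. A minimal standalone sketch (plain C, not SPDK code; the address and port are simply the ones from this log) that reproduces the same errno against a port with no listener:

    /* connect_probe.c -- reproduce "connect() failed, errno = 111" (ECONNREFUSED).
     * Standalone illustration, not SPDK code.
     * Build: cc -o connect_probe connect_probe.c
     * Usage: ./connect_probe 10.0.0.2 4420   (address/port taken from this log) */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <ipv4-addr> <port>\n", argv[0]);
            return 1;
        }

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons((uint16_t)atoi(argv[2]));
        if (inet_pton(AF_INET, argv[1], &sa.sin_addr) != 1) {
            fprintf(stderr, "bad IPv4 address: %s\n", argv[1]);
            close(fd);
            return 1;
        }

        /* With no listener bound to the port, the peer answers the SYN with
         * RST and connect() fails with ECONNREFUSED -- 111 on Linux, the
         * same value reported throughout this log. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected; a listener is up on %s:%s\n", argv[1], argv[2]);
        }

        close(fd);
        return 0;
    }

Run from the initiator host as ./connect_probe 10.0.0.2 4420; printing errno = 111 confirms the target's listener is down, which is consistent with every qpair failure in this storm.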
00:31:11.042 [2024-07-26 14:25:27.735783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.042 [2024-07-26 14:25:27.735837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.042 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7f227c000b90 through 14:25:27.742253; duplicate entries elided ...]
00:31:11.042 [2024-07-26 14:25:27.742461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c75b00 is same with the state(5) to be set
00:31:11.042 [2024-07-26 14:25:27.742764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.042 [2024-07-26 14:25:27.742808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.042 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7f226c000b90 through 14:25:27.763002; duplicate entries elided ...]
00:31:11.043 [2024-07-26 14:25:27.744343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.744407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.744708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.744743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.744984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.745019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.745266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.745330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.745632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.745667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.745960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.746024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.746325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.746353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.746525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.746554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.746713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.746742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.746928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.746976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 
00:31:11.043 [2024-07-26 14:25:27.747149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.747183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.747408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.747467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.747707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.747742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.748000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.748035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.748233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.748267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.748498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.748528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.748755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.748801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.749047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.749112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.749411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.749494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.749738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.749787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 
00:31:11.043 [2024-07-26 14:25:27.750025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.750060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.750305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.750369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.750667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.750695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.750927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.750962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.751201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.751263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.751570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.751600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.043 [2024-07-26 14:25:27.751831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.043 [2024-07-26 14:25:27.751866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.043 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.752101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.752135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.752330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.752365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.752592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.752622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 
00:31:11.044 [2024-07-26 14:25:27.752857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.752892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.755600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.755644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.755871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.755908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.756107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.756143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.756332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.756367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.756603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.756633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.756849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.756884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.757161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.757225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.757454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.757509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 00:31:11.044 [2024-07-26 14:25:27.757727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.044 [2024-07-26 14:25:27.757762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.044 qpair failed and we were unable to recover it. 
00:31:11.044 [2024-07-26 14:25:27.757965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.757993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.758205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.758241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.758442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.758488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.758707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.758736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.758995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.759059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.759350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.759385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.759671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.759700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.759927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.759962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.760167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.760201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.760456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.760486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.760707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.760772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.761038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.761072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.761337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.761366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.761594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.761623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.761767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.761802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.762030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.762059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.762273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.762308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.762493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.762522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.762710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.762739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.044 [2024-07-26 14:25:27.762967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.044 [2024-07-26 14:25:27.763002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.044 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.763232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.763267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.763462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.763508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.763728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.763800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.764101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.764136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.764313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.764341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.764538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.764568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.764792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.764827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.765067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.765095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.765328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.765363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.765598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.765627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.765775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.765804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.766005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.766069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.766351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.766416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.766691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.766719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.766931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.766967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.767188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.767222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.767445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.767474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.767664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.767742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.768034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.768075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.768297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.768325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.768538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.768568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.768789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.768824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.769044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.769072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.769281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.769316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.769543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.769571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.769759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.769787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.770039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.770104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.770388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.770422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.045 [2024-07-26 14:25:27.770676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.045 [2024-07-26 14:25:27.770705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.045 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.770999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.771064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.771336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.771371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.771617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.771646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.771883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.771947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.772239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.772274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.772526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.772555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.772732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.772767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.772956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.772990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.773182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.773210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.773401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.773441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.773688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.773738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.773987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.774015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.774246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.774310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.774570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.774599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.774777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.774806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.775032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.775097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.775375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.775439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.775670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.775700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.775903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.775951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.776212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.776260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.776543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.776584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.776834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.776881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.777092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.777141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.777414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.777462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.777702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.777750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.778025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.778089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.778349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.778379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.778621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.778649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.778833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.778886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.779115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.779161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.779448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.779509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.779727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.779774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.046 [2024-07-26 14:25:27.780017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.046 [2024-07-26 14:25:27.780056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.046 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.780347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.780395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.780609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.780648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.780852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.780890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.781195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.781273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.781524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.781553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.781767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.781795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.782007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.782065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.782258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.782291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.782491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.782519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.782717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.782778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.783006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.783062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.783295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.783322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.783505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.783558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.783774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.783830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.784040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.784067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.784281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.784315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.784514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.784543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.784761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.784789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.785014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.785067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.785255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.785289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.785475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.785504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.785712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.785774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.785987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.786042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.786250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.786278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.786501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.786529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.786761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.786826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.787001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.787029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.787223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.047 [2024-07-26 14:25:27.787257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.047 qpair failed and we were unable to recover it.
00:31:11.047 [2024-07-26 14:25:27.787501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.787529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.787739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.787767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.788023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.788078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.788268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.788302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.788493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.788521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.788745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.788801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.789004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.789058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.789260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.789287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.789515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.789548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.789754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.789820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.790039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.790066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.790250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.790284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.790500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.790529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.790697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.790723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.790920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.790975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.791198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.791232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.791458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.791485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.791704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.791770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.792002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.792053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.792255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.792283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.792513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.792540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.792721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.792768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.793001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.793028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.793254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.793308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.793543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.793571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.793753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.793780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.048 [2024-07-26 14:25:27.794007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.048 [2024-07-26 14:25:27.794059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.048 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.794284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.794317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.794510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.794537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.794728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.794783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.795006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.795062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.795261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.795289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.795511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.795539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.795747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.795802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.796021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.796048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.796237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.796271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.796487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.796515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.796697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.796725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.796959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.797015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.797242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.797275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.797504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.797533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.797730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.797784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.798021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.798075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.798295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.798323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.798546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.798602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.798838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.798892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.799117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.799144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.799346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.799379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.799587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.799620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.799805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.799833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.800054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.800108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.800302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.800335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.800553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.800582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.800801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.800856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.801096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.801152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.801363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.801391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.801585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.801613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.801827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.801881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.802085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.802113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.802303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.049 [2024-07-26 14:25:27.802336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.049 qpair failed and we were unable to recover it.
00:31:11.049 [2024-07-26 14:25:27.802555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.802583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.802742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.802769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.802998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.803052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.803268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.803300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.803530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.803557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.803775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.803830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.804074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.804129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.804352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.804384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.804624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.804652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.804900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.804953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 
00:31:11.050 [2024-07-26 14:25:27.805177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.805204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.805368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.805402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.805600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.805627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.805785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.805812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.805993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.806049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.806285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.806319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.806521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.806549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.806752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.806806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.807050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.807102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.807308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.807336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 
00:31:11.050 [2024-07-26 14:25:27.807524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.807552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.807744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.807805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.808015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.808043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.808238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.808272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.808510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.808538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.808745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.808771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.809009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.809063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.809284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.050 [2024-07-26 14:25:27.809317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.050 qpair failed and we were unable to recover it. 00:31:11.050 [2024-07-26 14:25:27.809504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.809537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.809730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.809787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 
00:31:11.051 [2024-07-26 14:25:27.809962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.810015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.810205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.810232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.810457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.810502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.810718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.810752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.810992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.811020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.811271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.811326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.811514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.811543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.811752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.811779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.812036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.812090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.812280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.812313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 
00:31:11.051 [2024-07-26 14:25:27.812539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.812568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.812804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.812857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.813103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.813155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.813341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.813369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.813573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.813602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.813790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.813842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.814044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.814072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.814251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.814285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.814508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.814537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.814748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.814776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 
00:31:11.051 [2024-07-26 14:25:27.815032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.815085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.815304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.815338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.815561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.815589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.815763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.815823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.816031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.816085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.816302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.816330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.816503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.816565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.816808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.816869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.817085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.817113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.817306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.817339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 
00:31:11.051 [2024-07-26 14:25:27.817565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.051 [2024-07-26 14:25:27.817594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.051 qpair failed and we were unable to recover it. 00:31:11.051 [2024-07-26 14:25:27.817783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.817811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.818033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.818089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.818291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.818325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.818529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.818558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.818775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.818828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.819043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.819098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.819285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.819312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.819526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.819559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.819788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.819841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 
00:31:11.052 [2024-07-26 14:25:27.820028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.820055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.820230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.820263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.820458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.820513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.820703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.820730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.820947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.821000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.821201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.821236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.821403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.821444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.821645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.821673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.821914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.821970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.822168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.822195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 
00:31:11.052 [2024-07-26 14:25:27.822392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.822426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.822683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.822729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.822959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.822987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.823210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.823266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.823517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.823546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.823754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.823782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.824030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.824083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.824313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.824346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.824569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.824598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.824759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.824814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 
00:31:11.052 [2024-07-26 14:25:27.825026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.825078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.825309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.825337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.825557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.825585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.825816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.052 [2024-07-26 14:25:27.825870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.052 qpair failed and we were unable to recover it. 00:31:11.052 [2024-07-26 14:25:27.826092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.826119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.826312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.826346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.826539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.826567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.826762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.826789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.826965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.827020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.827208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.827242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 
00:31:11.053 [2024-07-26 14:25:27.827418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.827453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.827601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.827629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.827851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.827906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.828144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.828171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.828356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.828390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.828597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.828625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.828813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.828840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.829079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.829131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.829320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.829353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.829574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.829603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 
00:31:11.053 [2024-07-26 14:25:27.829824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.829879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.830135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.830193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.830386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.830413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.830592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.830620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.830839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.830892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.831123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.831151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.831300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.831334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.831546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.831575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.831753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.831780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.831976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.832029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 
00:31:11.053 [2024-07-26 14:25:27.832265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.832319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.832522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.832551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.053 [2024-07-26 14:25:27.832759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.053 [2024-07-26 14:25:27.832815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.053 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.833024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.833077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.833298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.833325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.833556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.833584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.833799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.833851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.834068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.834095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.834290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.834324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.834526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.834554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 
00:31:11.054 [2024-07-26 14:25:27.834774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.834802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.835044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.835100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.835250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.835283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.835498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.835536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.835725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.835778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.836024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.836084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.836277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.836311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.836529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.836557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.836717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.836751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.836956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.836983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 
00:31:11.054 [2024-07-26 14:25:27.837201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.837235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.837460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.837495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.837696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.837723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.837944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.837998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.838209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.838263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.838468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.838497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.838691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.838745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.838951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.839004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.839227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.839255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.839490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.839524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 
00:31:11.054 [2024-07-26 14:25:27.839758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.839811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.840011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.840038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.840270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.840323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.840530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.840584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.840793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.054 [2024-07-26 14:25:27.840821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.054 qpair failed and we were unable to recover it. 00:31:11.054 [2024-07-26 14:25:27.841045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.841100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.841295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.841328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.841523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.841551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.841782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.841837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.842076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.842129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 
00:31:11.055 [2024-07-26 14:25:27.842347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.842375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.842550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.842578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.842788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.842842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.843016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.843044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.843259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.843291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.843516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.843569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.843797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.843823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.844020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.844071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.844260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.844293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.844487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.844514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 
00:31:11.055 [2024-07-26 14:25:27.844727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.844783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.845027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.845082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.845306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.845334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.845519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.845548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.845705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.845751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.845937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.845969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.846117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.846171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.846367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.846400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.846582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.846609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.846826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.846860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 
00:31:11.055 [2024-07-26 14:25:27.847061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.847095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.847275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.847309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.847507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.847535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.847732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.847801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.848026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.848054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.848272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.848306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.848502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.848558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.848796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.055 [2024-07-26 14:25:27.848823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.055 qpair failed and we were unable to recover it. 00:31:11.055 [2024-07-26 14:25:27.849047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.849101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.849327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.849361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 
00:31:11.056 [2024-07-26 14:25:27.849583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.849612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.849830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.849884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.850122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.850177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.850365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.850393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.850607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.850635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.850863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.850921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.851140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.851168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.851357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.851391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.851592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.851619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.851810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.851837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 
00:31:11.056 [2024-07-26 14:25:27.852029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.852084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.852278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.852310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.852536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.852565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.852787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.852843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.853069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.853124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.853320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.853348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.853535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.853564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.853793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.853848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.854071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.854098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.854293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.854327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 
00:31:11.056 [2024-07-26 14:25:27.854532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.854560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.854711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.854737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.854927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.854981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.855220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.855273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.855448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.855476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.855706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.855772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.855979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.856032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.856271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.856298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.856501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.856529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.856728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.856795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 
00:31:11.056 [2024-07-26 14:25:27.857026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.056 [2024-07-26 14:25:27.857053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.056 qpair failed and we were unable to recover it. 00:31:11.056 [2024-07-26 14:25:27.857201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.857234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.857422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.857470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.857704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.857731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.857961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.858017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.858259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.858315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.858518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.858547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.858772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.858827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.859031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.859084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.859281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.859309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 
00:31:11.057 [2024-07-26 14:25:27.859529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.859584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.859823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.859878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.860070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.860098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.860279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.860313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.860549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.860603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.860822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.860849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.861103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.861156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.861378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.861412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.861664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.861693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.861915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.861970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 
00:31:11.057 [2024-07-26 14:25:27.862202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.862257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.862438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.862465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.862655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.862702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.862949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.863008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.863212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.863240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.863485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.863559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.863757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.863817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.864053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.864080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.864321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.864376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.864580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.864608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 
00:31:11.057 [2024-07-26 14:25:27.864817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.864844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.865091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.865145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.865362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.865396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.865598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.057 [2024-07-26 14:25:27.865626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.057 qpair failed and we were unable to recover it. 00:31:11.057 [2024-07-26 14:25:27.865865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.865932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.866168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.866227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.866466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.866494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.866641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.866686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.866929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.866983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.867176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.867203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 
00:31:11.058 [2024-07-26 14:25:27.867390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.867424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.867661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.867688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.867874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.867902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.868128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.868182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.868413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.868455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.868649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.868676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.868858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.868913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.869126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.869178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.869401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.869443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.869645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.869690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 
00:31:11.058 [2024-07-26 14:25:27.869925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.869979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.870135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.870162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.870339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.870373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.870572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.870600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.870776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.870803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.871007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.871061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.871295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.871328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.871525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.871553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.871774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.871826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.872028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.872083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 
00:31:11.058 [2024-07-26 14:25:27.872258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.872286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.872501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.872530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.872766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.872825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.873013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.873041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.873235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.873269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.873474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.873526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.873750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.873778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.058 [2024-07-26 14:25:27.873953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.058 [2024-07-26 14:25:27.874009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.058 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.874236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.874270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.874524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.874552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 
00:31:11.059 [2024-07-26 14:25:27.874778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.874833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.875076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.875131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.875344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.875377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.875615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.875643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.875884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.875948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.876142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.876177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.876403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.876445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.876672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.876718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.876911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.876938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.877158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.877211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 
00:31:11.059 [2024-07-26 14:25:27.877395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.877448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.877688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.877715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.877941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.877996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.878226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.878281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.878498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.878526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.878738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.878793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.879040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.879097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.879329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.879358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.879539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.879568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.879799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.879854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 
00:31:11.059 [2024-07-26 14:25:27.880050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.880078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.880284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.880317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.880506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.880534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.880749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.880777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.881027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.059 [2024-07-26 14:25:27.881081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.059 qpair failed and we were unable to recover it. 00:31:11.059 [2024-07-26 14:25:27.881282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.881316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.881481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.881508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.881728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.881784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.882017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.882071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.882290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.882317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 
00:31:11.060 [2024-07-26 14:25:27.882506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.882535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.882754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.882810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.883009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.883037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.883214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.883248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.883471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.883519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.883721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.883749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.883941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.883997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.884209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.884263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.884478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.884507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.884666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.884710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 
00:31:11.060 [2024-07-26 14:25:27.884863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.884917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.885146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.885174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.885384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.885418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.885661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.885706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.885915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.885942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.886163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.886223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.886398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.886450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.886648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.886675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.886881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.886913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 00:31:11.060 [2024-07-26 14:25:27.887130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.060 [2024-07-26 14:25:27.887187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.060 qpair failed and we were unable to recover it. 
00:31:11.060 [2024-07-26 14:25:27.887420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.060 [2024-07-26 14:25:27.887454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.060 qpair failed and we were unable to recover it.
00:31:11.060 [2024-07-26 14:25:27.887660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.060 [2024-07-26 14:25:27.887704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.060 qpair failed and we were unable to recover it.
00:31:11.060 [2024-07-26 14:25:27.887892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.060 [2024-07-26 14:25:27.887925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.060 qpair failed and we were unable to recover it.
00:31:11.060 [2024-07-26 14:25:27.888135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.060 [2024-07-26 14:25:27.888163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.060 qpair failed and we were unable to recover it.
00:31:11.342 [2024-07-26 14:25:27.888312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.342 [2024-07-26 14:25:27.888345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.342 qpair failed and we were unable to recover it.
00:31:11.342 [2024-07-26 14:25:27.888560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.342 [2024-07-26 14:25:27.888587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.342 qpair failed and we were unable to recover it.
00:31:11.342 [2024-07-26 14:25:27.888801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.342 [2024-07-26 14:25:27.888836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.342 qpair failed and we were unable to recover it.
00:31:11.342 [2024-07-26 14:25:27.889061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.342 [2024-07-26 14:25:27.889088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.342 qpair failed and we were unable to recover it.
00:31:11.342 [2024-07-26 14:25:27.889299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.342 [2024-07-26 14:25:27.889327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.342 qpair failed and we were unable to recover it.
00:31:11.342 [2024-07-26 14:25:27.889540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.342 [2024-07-26 14:25:27.889568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.342 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.889783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.889812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.889960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.889988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.890198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.890226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.890439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.890468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.890650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.890676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.890875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.890903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.891068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.891095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.891302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.891330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.891538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.891566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.891773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.891801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.892017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.892044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.892225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.892252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.892462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.892490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.892668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.892695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.892877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.892904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.893112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.893139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.893351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.893378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.893561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.893589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.893767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.893795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.893978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.894005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.894189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.894217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.894424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.894468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.894617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.894645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.894854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.894881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.895087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.895115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.895292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.895324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.895533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.895561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.895773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.895800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.895979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.896006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.896225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.896253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.896471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.896500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.896703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.896731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.896939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.343 [2024-07-26 14:25:27.896967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.343 qpair failed and we were unable to recover it.
00:31:11.343 [2024-07-26 14:25:27.897150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.897177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.897394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.897421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.897645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.897672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.897848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.897874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.898052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.898078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.898259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.898291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.898521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.898549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.898766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.898794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.898958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.898985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.899170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.899197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.899349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.899376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.899554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.899583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.899786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.899813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.900021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.900049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.900223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.900250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.900410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.900444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.900654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.900680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.900830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.900858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.901035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.901062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.901255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.901283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.901480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.901508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.901690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.901717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.901856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.901883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.902063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.902095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.902321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.902353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.902569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.902597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.902757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.902784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.903003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.903031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.903187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.903215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.903426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.903459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.903637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.903664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.903880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.903907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.904087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.904118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.344 qpair failed and we were unable to recover it.
00:31:11.344 [2024-07-26 14:25:27.904338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.344 [2024-07-26 14:25:27.904365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.904527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.904555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.904761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.904788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.904971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.904999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.905207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.905234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.905445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.905489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.905663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.905690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.905864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.905891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.906068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.906101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.906317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.906349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.906546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.906574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.906764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.906791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.906979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.907005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.907186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.907213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.907393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.907425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.907646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.907673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.907881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.907910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.908122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.908149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.908334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.908362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.908568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.908596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.908776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.908804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.908994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.909020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.909201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.909229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.909444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.909490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.909672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.909699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.909838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.909865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.910081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.910108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.910291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.910318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.910503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.910532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.910708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.910736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.910929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.910956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.911141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.911168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.911325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.911351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.345 qpair failed and we were unable to recover it.
00:31:11.345 [2024-07-26 14:25:27.911578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.345 [2024-07-26 14:25:27.911606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.911777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.911804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.911981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.912008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.912217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.912245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.912427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.912459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.912664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.912691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.912868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.912899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.913090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.913118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.913301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.913335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.913544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.913573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.913741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.913768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.913983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.914010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.914195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.914222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.914369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.914398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.914556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.914585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.914761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.914789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.915001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.915028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.915207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.915233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.915450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.915478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.915686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.915714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.915874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.915902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.916041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.916070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.916249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.916276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.916476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.916505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.916686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.916714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.916894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.916922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.917068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.917095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.917276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.917304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.917483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.917510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.917701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.917729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.917924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.917952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.918130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.918158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.918363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.918390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.346 [2024-07-26 14:25:27.918589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.346 [2024-07-26 14:25:27.918617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.346 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.918807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.918834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.918975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.919003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.919197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.919225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.919400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.919433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.919585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.919612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.919834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.919861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.920037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.920065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.920207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.920235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.920442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.920471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.920627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.920654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.920865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.920892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.921087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.921115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.921305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.921340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.921512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.921541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.921689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.921715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.921904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.921931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.922138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.922165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.922341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.922368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.922547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.922576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.922755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.922783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.922966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.922992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.923140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.923166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.923376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.923410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.923621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.923649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.923864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.923891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.924068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.924096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.924285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.347 [2024-07-26 14:25:27.924313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.347 qpair failed and we were unable to recover it.
00:31:11.347 [2024-07-26 14:25:27.924511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.347 [2024-07-26 14:25:27.924539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.347 qpair failed and we were unable to recover it. 00:31:11.347 [2024-07-26 14:25:27.924722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.347 [2024-07-26 14:25:27.924749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.347 qpair failed and we were unable to recover it. 00:31:11.347 [2024-07-26 14:25:27.924919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.347 [2024-07-26 14:25:27.924946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.347 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.925138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.925166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.925314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.925347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.925545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.925573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.925743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.925770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.925904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.925932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.926133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.926161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.926296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.926323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 
00:31:11.348 [2024-07-26 14:25:27.926484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.926512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.926670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.926699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.926896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.926932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.927134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.927161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.927379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.927407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.927587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.927614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.927810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.927837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.928023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.928051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.928268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.928295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.928489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.928518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 
00:31:11.348 [2024-07-26 14:25:27.928705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.928732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.928938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.928967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.929188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.929240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.929439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.929492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.929646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.929685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.929861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.929892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.930052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.930080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.930281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.930310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.930529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.930558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.930723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.930750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 
00:31:11.348 [2024-07-26 14:25:27.930910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.930937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.931113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.931141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.348 [2024-07-26 14:25:27.931321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.348 [2024-07-26 14:25:27.931350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.348 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.931549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.931576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.931714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.931741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.931880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.931908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.932127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.932155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.932332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.932365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.932591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.932618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.932836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.932863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 
00:31:11.349 [2024-07-26 14:25:27.933024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.933085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.933307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.933340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.933558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.933585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.933727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.933755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.933924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.933952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.934117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.934144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.934326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.934352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.934575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.934603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.934807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.934835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.935051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.935078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 
00:31:11.349 [2024-07-26 14:25:27.935279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.935308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.935511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.935539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.935735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.935773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.935972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.936010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.936220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.936259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.936506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.936545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.936736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.936777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.936980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.937009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.937176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.937209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.937402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.937444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 
00:31:11.349 [2024-07-26 14:25:27.937626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.937663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.937897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.937935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.938182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.938221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.938451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.938483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.938633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.938662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.349 [2024-07-26 14:25:27.938857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.349 [2024-07-26 14:25:27.938890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.349 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.939110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.939150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.939408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.939481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.939699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.939738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.939940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.939988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 
00:31:11.350 [2024-07-26 14:25:27.940144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.940173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.940367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.940395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.940572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.940602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.940805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.940842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.941036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.941074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.941232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.941269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.941441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.941482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.941678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.941708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.941927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.941956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.942118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.942152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 
00:31:11.350 [2024-07-26 14:25:27.942403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.942452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.942641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.942689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.942918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.942956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.943201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.943241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.943395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.943423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.943577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.943604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.943848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.943887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.944133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.944173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.944377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.944425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.944629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.944660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 
00:31:11.350 [2024-07-26 14:25:27.944838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.944865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.945051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.945078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.945293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.945333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.945549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.945589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.945831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.945870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.946100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.946140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.946393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.946452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.946651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.946691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.350 qpair failed and we were unable to recover it. 00:31:11.350 [2024-07-26 14:25:27.946925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.350 [2024-07-26 14:25:27.946955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.947123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.947150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 
00:31:11.351 [2024-07-26 14:25:27.947354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.947383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.947556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.947598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.947851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.947889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.948151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.948190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.948442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.948476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.948648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.948676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.948871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.948900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.949111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.949150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.949359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.949398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.949590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.949630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 
00:31:11.351 [2024-07-26 14:25:27.949845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.949886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.950124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.950162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.950404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.950460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.950656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.950695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.950916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.950955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.951146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.951184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.951403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.951452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.951629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.951669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.951887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.951917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.952106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.952135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 
00:31:11.351 [2024-07-26 14:25:27.952286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.952325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.952545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.952583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.952775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.952813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.953025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.953064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.953281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.953312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.953509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.953538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.953683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.953712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.953932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.953971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.954206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.954245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.954502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.954541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 
00:31:11.351 [2024-07-26 14:25:27.954730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.954770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.351 [2024-07-26 14:25:27.955003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.351 [2024-07-26 14:25:27.955042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.351 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.955220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.955266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.955493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.955532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.955747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.955777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.955975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.956003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.956200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.956238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.956461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.956508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.956671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.956710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.956949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.956988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 
00:31:11.352 [2024-07-26 14:25:27.957230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.957269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.957458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.957498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.957679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.957718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.957923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.957963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.958137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.958167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.958376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.958410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.958584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.958613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.958835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.958875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.959108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.959146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 00:31:11.352 [2024-07-26 14:25:27.959384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.352 [2024-07-26 14:25:27.959423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.352 qpair failed and we were unable to recover it. 
00:31:11.352 [2024-07-26 14:25:27.959599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.352 [2024-07-26 14:25:27.959630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.352 qpair failed and we were unable to recover it.
... (the three-line connect()/qpair-failure sequence above repeats 72 more times for tqpair=0x7f227c000b90, timestamps 2024-07-26 14:25:27.959787 through 14:25:27.977928, errno = 111 throughout) ...
00:31:11.354 [2024-07-26 14:25:27.978210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.354 [2024-07-26 14:25:27.978273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:11.354 qpair failed and we were unable to recover it.
... (the same three-line sequence repeats 136 more times for tqpair=0x1c78ea0, timestamps 2024-07-26 14:25:27.978489 through 14:25:28.009938, errno = 111 throughout) ...
00:31:11.359 [2024-07-26 14:25:28.010114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.010159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.010340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.010372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.010559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.010594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.010835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.010906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.011069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.011114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.011295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.011322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.011485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.011533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.011675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.011720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.011957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.012001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.012175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.012203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 
00:31:11.359 [2024-07-26 14:25:28.012380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.012408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.012577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.012626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.012832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.012881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.013072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.013119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.013296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.013324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.013529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.013577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.013788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.013822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.014034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.014078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.014244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.014271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.014460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.014488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 
00:31:11.359 [2024-07-26 14:25:28.014646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.014694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.014891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.014944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.015120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.015165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.015328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.015362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.015542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.015589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.015783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.015829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.016062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.016110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.016292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.016319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.016511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.016562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.359 qpair failed and we were unable to recover it. 00:31:11.359 [2024-07-26 14:25:28.016743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.359 [2024-07-26 14:25:28.016804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 
00:31:11.360 [2024-07-26 14:25:28.016983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.017032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.017200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.017228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.017392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.017420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.017588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.017634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.017839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.017888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.018038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.018086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.018260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.018288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.018442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.018470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.018625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.018686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.018918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.018968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 
00:31:11.360 [2024-07-26 14:25:28.019191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.019236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.019416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.019451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.019638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.019698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.019916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.019963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.020161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.020210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.020390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.020418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.020598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.020647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.020852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.020901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.021128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.021179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.021325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.021353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 
00:31:11.360 [2024-07-26 14:25:28.021543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.021572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.021769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.021819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.022015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.022060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.022239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.022266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.022406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.022439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.022600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.022655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.022858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.022908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.023102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.023150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.023304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.023340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.023554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.023600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 
00:31:11.360 [2024-07-26 14:25:28.023849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.023908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.024090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.024136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.024316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.360 [2024-07-26 14:25:28.024343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.360 qpair failed and we were unable to recover it. 00:31:11.360 [2024-07-26 14:25:28.024569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.024616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.024826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.024872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.025035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.025087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.025247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.025274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.025426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.025480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.025634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.025682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.025852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.025905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 
00:31:11.361 [2024-07-26 14:25:28.026070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.026117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.026328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.026356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.026557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.026607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.026815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.026860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.027055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.027106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.027289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.027317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.027523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.027570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.027733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.027783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.027959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.028008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.028215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.028243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 
00:31:11.361 [2024-07-26 14:25:28.028473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.028502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.028671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.028724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.028907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.028954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.029142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.029190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.029402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.029437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.029601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.029647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.029847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.029896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.030074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.030124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.030268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.030297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.030475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.030504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 
00:31:11.361 [2024-07-26 14:25:28.030664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.030716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.030945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.030991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.031196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.031244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.031415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.031449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.031601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.031648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.031845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.031894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.361 [2024-07-26 14:25:28.032086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.361 [2024-07-26 14:25:28.032139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.361 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.032325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.032352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.032569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.032616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.032806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.032855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 
00:31:11.362 [2024-07-26 14:25:28.033053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.033099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.033281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.033310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.033511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.033559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.033719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.033764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.033991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.034041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.034220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.034248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.034422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.034455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.034607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.034655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.034898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.034947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.035179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.035223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 
00:31:11.362 [2024-07-26 14:25:28.035477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.035506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.035664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.035709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.035915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.035961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.036176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.036226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.036571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.036600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.036756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.036804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.037052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.037101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.037412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.037491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.037630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.037658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.037826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.037882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 
00:31:11.362 [2024-07-26 14:25:28.038170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.038229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.038519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.038547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.038687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.038715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.038937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.038991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.039203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.039248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.039396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.039424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.039591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.039619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.039842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.039887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.040226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.040272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 00:31:11.362 [2024-07-26 14:25:28.040529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.040558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.362 qpair failed and we were unable to recover it. 
00:31:11.362 [2024-07-26 14:25:28.040693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.362 [2024-07-26 14:25:28.040738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.040903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.040965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.041224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.041273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.041530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.041575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.041737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.041764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.042080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.042109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.042419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.042454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.042607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.042634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.042834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.042881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 00:31:11.363 [2024-07-26 14:25:28.043092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.363 [2024-07-26 14:25:28.043137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.363 qpair failed and we were unable to recover it. 
00:31:11.369 [2024-07-26 14:25:28.097259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.097309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.097527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.097555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.097802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.097852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.098030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.098081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.098268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.098314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.098511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.098539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.098705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.098757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.099026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.099073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.099337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.099365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.099591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.099619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 
00:31:11.369 [2024-07-26 14:25:28.099796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.099847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.100136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.100198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.100460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.100489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.100741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.100788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.101069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.101121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.101450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.101479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.101746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.101774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.102049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.102103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.102402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.102471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.102720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.102748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 
00:31:11.369 [2024-07-26 14:25:28.102920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.102976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.103269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.103324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.103571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.103600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.103892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.103942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.104230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.104288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.369 qpair failed and we were unable to recover it. 00:31:11.369 [2024-07-26 14:25:28.104482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.369 [2024-07-26 14:25:28.104511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.104741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.104798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.105083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.105138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.105383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.105411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.105727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.105786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 
00:31:11.370 [2024-07-26 14:25:28.106073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.106129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.106402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.106460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.106716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.106744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.107000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.107056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.107349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.107392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.107594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.107623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.107793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.107844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.108065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.108115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.108471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.108499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.108726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.108754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 
00:31:11.370 [2024-07-26 14:25:28.108943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.108988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.109200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.109248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.109494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.109523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.109785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.109813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.110033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.110083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.110425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.110483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.110749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.110777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.111102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.111150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.111488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.111515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.111716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.111743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 
00:31:11.370 [2024-07-26 14:25:28.111991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.112019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.112328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.112391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.112751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.112833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.113178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.113229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.113515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.113544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.113793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.113821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.114053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.114101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.114305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.114357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.370 [2024-07-26 14:25:28.114766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.370 [2024-07-26 14:25:28.114810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.370 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.115072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.115121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 
00:31:11.371 [2024-07-26 14:25:28.115416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.115485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.115736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.115765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.116032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.116083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.116368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.116419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.116683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.116712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.116996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.117049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.117367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.117417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.117751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.117795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.118003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.118054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.118336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.118392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 
00:31:11.371 [2024-07-26 14:25:28.118678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.118708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.119105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.119163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.119474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.119503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.119751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.119779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.120034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.120090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.120356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.120414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.120814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.120857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.121207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.121259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.121476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.121506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.121672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.121700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 
00:31:11.371 [2024-07-26 14:25:28.121895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.121946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.122146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.122198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.122446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.122474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.122711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.122739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.122954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.123004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.123191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.123237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.123563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.123591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.123843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.123891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.124160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.124208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.124483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.124512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 
00:31:11.371 [2024-07-26 14:25:28.124793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.124822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.125073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.125122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.371 [2024-07-26 14:25:28.125460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.371 [2024-07-26 14:25:28.125529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.371 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.125801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.125829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.126031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.126077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.126347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.126399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.126664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.126692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.126913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.126959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.127266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.127325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.127579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.127608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 
00:31:11.372 [2024-07-26 14:25:28.127848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.127892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.128207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.128258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.128581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.128610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.128894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.128942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.129233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.129284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.129614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.129648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.129940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.129986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.130281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.130331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.130651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.130680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.130963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.131012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 
00:31:11.372 [2024-07-26 14:25:28.131302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.131355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.131628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.131657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.131835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.131881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.132196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.132247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.132552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.132580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.132862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.132908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.133226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.133275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.133547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.133576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.133872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.133927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.134229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.134281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 
00:31:11.372 [2024-07-26 14:25:28.134506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.134534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.134734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.134780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.135003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.135051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.135349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.135400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.135690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.135719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.136007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.372 [2024-07-26 14:25:28.136059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.372 qpair failed and we were unable to recover it. 00:31:11.372 [2024-07-26 14:25:28.136377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.136435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.136669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.136696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.136922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.136974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.137251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.137302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 
00:31:11.373 [2024-07-26 14:25:28.137540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.137569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.137801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.137850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.138125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.138186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.138493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.138521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.138748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.138797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.139127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.139176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.139419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.139454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.139842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.139885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.140142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.140190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 00:31:11.373 [2024-07-26 14:25:28.140424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.373 [2024-07-26 14:25:28.140461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.373 qpair failed and we were unable to recover it. 
00:31:11.373 [2024-07-26 14:25:28.140844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.373 [2024-07-26 14:25:28.140888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:11.373 qpair failed and we were unable to recover it.
00:31:11.379 [... the same triplet — posix.c:1023:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously through 2024-07-26 14:25:28.203676 ...]
00:31:11.380 [2024-07-26 14:25:28.203878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.203924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.204185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.204232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.204456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.204484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.204830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.204901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.205195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.205248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.205445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.205474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.205731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.205758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.206075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.206120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.206365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.206410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.206602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.206630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 
00:31:11.380 [2024-07-26 14:25:28.206941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.206987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.207263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.207311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.207576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.207605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.207960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.207993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.208348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.208402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.208659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.208687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.208908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.208953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.209236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.209268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.209533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.209562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.209787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.209837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 
00:31:11.380 [2024-07-26 14:25:28.210100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.210144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.210482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.210510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.210811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.210858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.380 [2024-07-26 14:25:28.211114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.380 [2024-07-26 14:25:28.211158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.380 qpair failed and we were unable to recover it. 00:31:11.654 [2024-07-26 14:25:28.211474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.654 [2024-07-26 14:25:28.211502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-07-26 14:25:28.211750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.654 [2024-07-26 14:25:28.211778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-07-26 14:25:28.212003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.654 [2024-07-26 14:25:28.212037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-07-26 14:25:28.212365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.654 [2024-07-26 14:25:28.212411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-07-26 14:25:28.212707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.654 [2024-07-26 14:25:28.212758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-07-26 14:25:28.213006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.654 [2024-07-26 14:25:28.213043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.654 qpair failed and we were unable to recover it. 
00:31:11.654 (the same failure triplet repeats for tqpair=0x7f226c000b90 -- connect() failed, errno = 111 on addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- 58 more times between 14:25:28.213006 and 14:25:28.229922)
00:31:11.656 [2024-07-26 14:25:28.230214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.656 [2024-07-26 14:25:28.230279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420
00:31:11.656 qpair failed and we were unable to recover it.
00:31:11.656 (the same failure triplet repeats for tqpair=0x7f2274000b90 -- connect() failed, errno = 111 on addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- 82 more times between 14:25:28.230595 and 14:25:28.256924)
00:31:11.659 [2024-07-26 14:25:28.257232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.257266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.257592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.257656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.257977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.258005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.258308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.258381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.258659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.258687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.258934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.258961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.259145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.259180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.259385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.259461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.259714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.259742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.260000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.260034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 
00:31:11.659 [2024-07-26 14:25:28.260206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.260269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.260556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.260585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.260822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.260856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.261139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.261202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.261505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.261533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.261822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.261857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.262106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.262170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.262508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.262537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.262802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.262836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.263066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.263129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 
00:31:11.659 [2024-07-26 14:25:28.263409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.263447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.263665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.263715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.264040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.264103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.659 qpair failed and we were unable to recover it. 00:31:11.659 [2024-07-26 14:25:28.264392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.659 [2024-07-26 14:25:28.264419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.264656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.264705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.265035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.265098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.265419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.265456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.265655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.265703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.265950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.266011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.266340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.266393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 
00:31:11.660 [2024-07-26 14:25:28.266747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.266802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.267141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.267203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.267467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.267496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.267690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.267725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.268025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.268088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.268342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.268370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.268547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.268575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.268800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.268863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.269153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.269181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.269472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.269516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 
00:31:11.660 [2024-07-26 14:25:28.269747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.269809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.270062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.270090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.270281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.270315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.270542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.270594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.270867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.270895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.271137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.271171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.271400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.271485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.271714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.271742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.272080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.272135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.272423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.272502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 
00:31:11.660 [2024-07-26 14:25:28.272820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.272848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.273166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.273228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.273554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.273619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.273875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.273903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.274114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.274148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.274397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.274488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.660 [2024-07-26 14:25:28.274731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.660 [2024-07-26 14:25:28.274759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.660 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.274952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.274987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.275182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.275243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.275523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.275552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 
00:31:11.661 [2024-07-26 14:25:28.275754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.275789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.276113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.276177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.276500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.276528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.276845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.276880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.277162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.277224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.277523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.277552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.277846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.277881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.278173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.278235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.278527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.278555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.278773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.278808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 
00:31:11.661 [2024-07-26 14:25:28.279124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.279187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.279499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.279528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.279710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.279745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.279961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.280022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.280330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.280392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.280705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.280732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.280956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.281018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.281283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.281311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.281493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.281528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.281734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.281797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 
00:31:11.661 [2024-07-26 14:25:28.282054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.282083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.282291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.282326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.282618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.282663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.282985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.283058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.283411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.283508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.283769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.283832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.284061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.284089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.284319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.284382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.284716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.284744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 00:31:11.661 [2024-07-26 14:25:28.285039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.661 [2024-07-26 14:25:28.285067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.661 qpair failed and we were unable to recover it. 
00:31:11.661 [2024-07-26 14:25:28.285283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.285318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.285534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.285599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.285908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.285936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.286192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.286226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.286450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.286515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.286769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.286836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.287120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.287154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.287411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.287503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.287687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.287716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.287938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.287971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 
00:31:11.662 [2024-07-26 14:25:28.288232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.288295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.288624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.288676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.289002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.289035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.289337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.289400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.289727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.289802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.290136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.290205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.290495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.290542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.290815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.290842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.291107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.291141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.291389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.291483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 
00:31:11.662 [2024-07-26 14:25:28.291759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.291816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.292133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.292168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.292482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.292546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.292864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.292892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.293226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.293295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.293602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.293666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.293963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.293992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.294256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.294290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.294555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.294601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.294869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.294896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 
00:31:11.662 [2024-07-26 14:25:28.295073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.295107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.295371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.295447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.662 [2024-07-26 14:25:28.295684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.662 [2024-07-26 14:25:28.295712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.662 qpair failed and we were unable to recover it. 00:31:11.663 [2024-07-26 14:25:28.295931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.663 [2024-07-26 14:25:28.295971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.663 qpair failed and we were unable to recover it. 00:31:11.663 [2024-07-26 14:25:28.296202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.663 [2024-07-26 14:25:28.296266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.663 qpair failed and we were unable to recover it. 00:31:11.663 [2024-07-26 14:25:28.296550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.663 [2024-07-26 14:25:28.296579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.663 qpair failed and we were unable to recover it. 00:31:11.663 [2024-07-26 14:25:28.296842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.663 [2024-07-26 14:25:28.296897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.663 qpair failed and we were unable to recover it. 00:31:11.663 [2024-07-26 14:25:28.297230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.663 [2024-07-26 14:25:28.297292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.663 qpair failed and we were unable to recover it. 00:31:11.663 [2024-07-26 14:25:28.297597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.663 [2024-07-26 14:25:28.297626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.663 qpair failed and we were unable to recover it. 00:31:11.663 [2024-07-26 14:25:28.297880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.663 [2024-07-26 14:25:28.297915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.663 qpair failed and we were unable to recover it. 
00:31:11.663 [2024-07-26 14:25:28.298146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.663 [2024-07-26 14:25:28.298210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420
00:31:11.663 qpair failed and we were unable to recover it.
00:31:11.663 [... the same three-line connect()/qpair failure repeats back-to-back roughly 200 more times between 14:25:28.298 and 14:25:28.363, every occurrence with errno = 111 on tqpair=0x7f2274000b90, addr=10.0.0.2, port=4420; only the timestamps differ ...]
00:31:11.669 [2024-07-26 14:25:28.363488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.669 [2024-07-26 14:25:28.363517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420
00:31:11.669 qpair failed and we were unable to recover it.
00:31:11.669 [2024-07-26 14:25:28.363657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.363685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.363966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.363994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.364216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.364251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.364509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.364573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.364864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.364891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.365128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.365162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.365456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.365520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.365854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.365910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.366210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.366244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 00:31:11.669 [2024-07-26 14:25:28.366514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.366559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.669 qpair failed and we were unable to recover it. 
00:31:11.669 [2024-07-26 14:25:28.366769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.669 [2024-07-26 14:25:28.366802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.367023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.367057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.367362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.367424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.367734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.367799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.368124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.368158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.368455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.368529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.368726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.368754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.368950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.368984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.369212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.369274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.369566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.369595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 
00:31:11.670 [2024-07-26 14:25:28.369810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.369844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.370045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.370108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.370442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.370470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.370696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.370730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.371014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.371077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.371363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.371391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.371593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.371623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.371843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.371905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.372225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.372253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.372550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.372586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 
00:31:11.670 [2024-07-26 14:25:28.372808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.372870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.373192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.373253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.373545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.373581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.373839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.373900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.374194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.374221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.374411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.374453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.374710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.374773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.375096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.375124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.670 qpair failed and we were unable to recover it. 00:31:11.670 [2024-07-26 14:25:28.375448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.670 [2024-07-26 14:25:28.375524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.375746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.375778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 
00:31:11.671 [2024-07-26 14:25:28.375940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.375968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.376188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.376222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.376506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.376535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.376712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.376740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.376961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.376995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.377257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.377320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.377658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.377686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.378025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.378095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.378411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.378491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.378812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.378875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 
00:31:11.671 [2024-07-26 14:25:28.379194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.379234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.379574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.379619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.379881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.379908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.380118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.380152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.380378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.380453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.380715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.380742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.381049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.381083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.381442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.381517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.381736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.381764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.382022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.382056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 
00:31:11.671 [2024-07-26 14:25:28.382255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.382318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.382648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.382677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.382983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.383017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.383322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.383385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.383731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.383760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.384108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.384142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.384473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.384525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.384779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.384850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.385179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.385213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.385553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.385617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 
00:31:11.671 [2024-07-26 14:25:28.385944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.386008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.386339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.671 [2024-07-26 14:25:28.386395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.671 qpair failed and we were unable to recover it. 00:31:11.671 [2024-07-26 14:25:28.386755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.386818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.387104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.387132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.387329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.387363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.387568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.387596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.387813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.387840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.388152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.388187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.388530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.388594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.388923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.388982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 
00:31:11.672 [2024-07-26 14:25:28.389306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.389377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.389714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.389783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.390102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.390129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.390479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.390508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.390738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.390801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.391123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.391150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.391457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.391521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.391692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.391740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.392063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.392091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.392319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.392354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 
00:31:11.672 [2024-07-26 14:25:28.392592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.392632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.392904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.392933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.393200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.393234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.393471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.393506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.393728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.393756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.394021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.394075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.394322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.394379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.394605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.394634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.394827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.394861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.395065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.395120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 
00:31:11.672 [2024-07-26 14:25:28.395314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.395341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.395588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.395642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.395988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.396055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.396398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.396475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.396696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.396742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.672 [2024-07-26 14:25:28.397020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.672 [2024-07-26 14:25:28.397084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.672 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.397397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.397424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.397722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.397757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.398056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.398120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.398445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.398489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 
00:31:11.673 [2024-07-26 14:25:28.398746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.398812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.399134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.399195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.399479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.399508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.399714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.399749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.400013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.400075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.400351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.400379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.400682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.400711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.400952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.401016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.401337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.401366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.401656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.401684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 
00:31:11.673 [2024-07-26 14:25:28.401936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.401998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.402324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.402386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.402702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.402730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.402929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.402991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.403316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.403344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.403648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.403677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.403883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.403946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.404259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.404287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.404586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.404621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 00:31:11.673 [2024-07-26 14:25:28.404906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.673 [2024-07-26 14:25:28.404969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:11.673 qpair failed and we were unable to recover it. 
00:31:11.673 [2024-07-26 14:25:28.405283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.673 [2024-07-26 14:25:28.405317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420
00:31:11.673 qpair failed and we were unable to recover it.
00:31:11.673 [the three-line connect()/qpair error above repeats back-to-back for tqpair=0x7f2274000b90, timestamps 2024-07-26 14:25:28.405666 through 14:25:28.473375; only the timestamps differ]
00:31:11.680 [2024-07-26 14:25:28.473739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.680 [2024-07-26 14:25:28.473842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.680 qpair failed and we were unable to recover it.
00:31:11.680 [2024-07-26 14:25:28.474189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.474244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.474536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.474573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.474915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.474979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.475276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.475304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.475612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.475648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.475957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.476020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.476318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.476346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.476506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.476542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.476788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.476851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.477161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.477190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 
00:31:11.680 [2024-07-26 14:25:28.477518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.477577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.477902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.477965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.478275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.478303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.478614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.478643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.478900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.478963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.479265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.479293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.680 [2024-07-26 14:25:28.479613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.680 [2024-07-26 14:25:28.479641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.680 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.479889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.479954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.480291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.480343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.480670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.480699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 
00:31:11.681 [2024-07-26 14:25:28.481036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.481099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.481411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.481449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.481678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.481714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.481993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.482057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.482382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.482411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.482703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.482749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.483000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.483064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.483381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.483410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.483667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.483695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.483906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.483970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 
00:31:11.681 [2024-07-26 14:25:28.484289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.484322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.484664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.484693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.484980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.485044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.485333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.485361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.485649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.485729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.486018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.486082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.486377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.486405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.681 [2024-07-26 14:25:28.486666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.681 [2024-07-26 14:25:28.486694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.681 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.486987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.487051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.487378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.487407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 
00:31:11.682 [2024-07-26 14:25:28.487724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.487774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.487982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.488045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.488379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.488408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.488631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.488660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.488889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.488953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.489267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.489296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.489592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.489621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.489812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.489876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.490199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.490228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.490491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.490526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 
00:31:11.682 [2024-07-26 14:25:28.490824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.490888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.491196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.491224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.491565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.491600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.491848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.491912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.492215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.492243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.492517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.492553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.492804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.492868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.493185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.493214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.493480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.493516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.493753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.493817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 
00:31:11.682 [2024-07-26 14:25:28.494148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.494197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.494510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.494546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.494850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.494913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.495191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.495219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.495513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.495549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.495721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.495790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.496068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.496096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.496338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.496403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.496666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.496694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.682 [2024-07-26 14:25:28.497019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.497082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 
00:31:11.682 [2024-07-26 14:25:28.497415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.682 [2024-07-26 14:25:28.497505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.682 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.497763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.497827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.498075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.498104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.498296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.498330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.498585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.498651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.498987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.499043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.499387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.499468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.499717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.499748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.500084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.500113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.500402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.500453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 
00:31:11.683 [2024-07-26 14:25:28.500661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.500706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.500997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.501024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.501228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.501262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.501484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.501521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.501752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.501780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.502065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.502131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.502414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.502511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.502769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.502841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.503159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.503194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.503497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.503526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 
00:31:11.683 [2024-07-26 14:25:28.503707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.503736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.503947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.503982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.504223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.504286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.504605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.504634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.504942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.504978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.505261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.505324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.505656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.505685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.505978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.506014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.506334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.506398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.506733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.506762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 
00:31:11.683 [2024-07-26 14:25:28.507037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.507071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.507337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.507400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.507712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.507740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.683 [2024-07-26 14:25:28.508084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.683 [2024-07-26 14:25:28.508144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.683 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.508457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.508525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.508757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.509092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.509127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.509467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.509533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.509852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.509881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.510167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.510201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 
00:31:11.684 [2024-07-26 14:25:28.510512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.510586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.510885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.510913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.511163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.511198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.511512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.511557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.511802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.511831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.512048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.512083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.512389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.512467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.512758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.512814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.513150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.513185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.513461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.513527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 
00:31:11.684 [2024-07-26 14:25:28.513714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.513743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.513938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.513972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.514178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.514242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.514481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.514510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.514696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.514730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.515026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.515089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.515406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.515441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.515777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.515843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.516168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.516231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 00:31:11.684 [2024-07-26 14:25:28.516568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.684 [2024-07-26 14:25:28.516626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.684 qpair failed and we were unable to recover it. 
00:31:11.684 [2024-07-26 14:25:28.516946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.684 [2024-07-26 14:25:28.516980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.684 qpair failed and we were unable to recover it.
[... the identical three-message sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats verbatim for every intervening reconnect attempt, timestamps advancing from 14:25:28.517277 to 14:25:28.582388, Jenkins clock 00:31:11.684 through 00:31:11.968 ...]
00:31:11.968 [2024-07-26 14:25:28.582639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.968 [2024-07-26 14:25:28.582668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:11.968 qpair failed and we were unable to recover it.
00:31:11.968 [2024-07-26 14:25:28.582871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.582906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.583141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.583205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.583487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.583517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.583725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.583760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.583962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.584026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.584270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.584298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.584495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.584525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.584677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.584743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.585029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.585057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.585274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.585339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 
00:31:11.968 [2024-07-26 14:25:28.585607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.585636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.585820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.585848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.586034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.586068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.586316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.586380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.586652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.586681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.586883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.586919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.587126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.587191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.587485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.968 [2024-07-26 14:25:28.587515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.968 qpair failed and we were unable to recover it. 00:31:11.968 [2024-07-26 14:25:28.587695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.587730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.587932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.587996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 
00:31:11.969 [2024-07-26 14:25:28.588297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.588325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.588517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.588547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.588735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.588800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.589091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.589119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.589339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.589375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.589602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.589632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.589840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.589869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.590092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.590127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.590381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.590458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.590714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.590742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 
00:31:11.969 [2024-07-26 14:25:28.591024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.591059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.591279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.591343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.591634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.591664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.591822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.591857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.592062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.592127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.592410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.592449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.592618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.592647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.592876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.592951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.593243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.593271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.593496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.593525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 
00:31:11.969 [2024-07-26 14:25:28.593710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.593785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.594069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.594097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.594313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.594348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.594576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.594605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.594757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.594786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.594928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.594963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.595169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.595234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.595492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.595521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.595708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.595753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 00:31:11.969 [2024-07-26 14:25:28.595965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.596028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.969 qpair failed and we were unable to recover it. 
00:31:11.969 [2024-07-26 14:25:28.596318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.969 [2024-07-26 14:25:28.596347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.596539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.596568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.596761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.596825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.597090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.597119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.597280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.597315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.597555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.597584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.597765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.597793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.597978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.598014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.598260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.598324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.598622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.598650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 
00:31:11.970 [2024-07-26 14:25:28.598886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.598921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.599206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.599270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.599548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.599577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.599757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.599792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.600004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.600069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.600351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.600380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.600639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.600668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.600865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.600930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.601156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.601185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.601380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.601415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 
00:31:11.970 [2024-07-26 14:25:28.601677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.601738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.602019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.602047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.602242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.602276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.602512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.602541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.602759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.602788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.603068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.603102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.603304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.603368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.603605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.603641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.603852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.603887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.604183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.604246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 
00:31:11.970 [2024-07-26 14:25:28.604505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.604534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.604727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.604762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.605003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.605067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.605322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.970 [2024-07-26 14:25:28.605350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.970 qpair failed and we were unable to recover it. 00:31:11.970 [2024-07-26 14:25:28.605543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.605572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.605771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.605836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.606092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.606120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.606331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.606366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.606553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.606582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.606793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.606822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 
00:31:11.971 [2024-07-26 14:25:28.607105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.607140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.607406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.607526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.607736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.607765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.608031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.608066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.608293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.608356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.608657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.608686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.608893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.608929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.609120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.609183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.609476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.609505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.609690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.609725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 
00:31:11.971 [2024-07-26 14:25:28.609955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.610019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.610315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.610344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.610578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.610607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.610835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.610901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.611191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.611221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.611410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.611455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.611691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.611736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.612022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.612051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.612265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.612300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.612534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.612563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 
00:31:11.971 [2024-07-26 14:25:28.612761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.612791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.613019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.613054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.613304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.613368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.613651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.613680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.613934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.613970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.614196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.614260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.614536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.971 [2024-07-26 14:25:28.614565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.971 qpair failed and we were unable to recover it. 00:31:11.971 [2024-07-26 14:25:28.614772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.614812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.615075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.615139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.615394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.615422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 
00:31:11.972 [2024-07-26 14:25:28.615653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.615700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.615963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.616030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.616303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.616332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.616524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.616589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.616844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.616908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.617151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.617179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.617469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.617504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.617702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.617738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.617960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.617988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 00:31:11.972 [2024-07-26 14:25:28.618237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.972 [2024-07-26 14:25:28.618290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.972 qpair failed and we were unable to recover it. 
00:31:11.972 [2024-07-26 14:25:28.618555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.972 [2024-07-26 14:25:28.618593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.972 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats roughly 200 more times between 14:25:28.618 and 14:25:28.674, every attempt failing with errno = 111 against tqpair=0x7f227c000b90, addr=10.0.0.2, port=4420 ...]
00:31:11.978 [2024-07-26 14:25:28.674099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.978 [2024-07-26 14:25:28.674153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.978 qpair failed and we were unable to recover it.
00:31:11.978 [2024-07-26 14:25:28.674380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.978 [2024-07-26 14:25:28.674413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.978 qpair failed and we were unable to recover it. 00:31:11.978 [2024-07-26 14:25:28.674640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.978 [2024-07-26 14:25:28.674667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.978 qpair failed and we were unable to recover it. 00:31:11.978 [2024-07-26 14:25:28.674864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.674917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.675154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.675209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.675475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.675509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.675699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.675727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.675946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.676001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.676214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.676242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.676439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.676474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.676713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.676748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 
00:31:11.979 [2024-07-26 14:25:28.676983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.677011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.677225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.677280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.677510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.677538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.677752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.677780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.677973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.678028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.678265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.678331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.678498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.678527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.678764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.678827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.679074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.679129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.679344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.679372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 
00:31:11.979 [2024-07-26 14:25:28.679568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.679596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.679822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.679883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.680100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.680128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.680338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.680372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.680600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.680629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.680800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.680828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.681042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.681094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.681323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.681358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.681520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.681549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.681782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.681849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 
00:31:11.979 [2024-07-26 14:25:28.682058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.682111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.682336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.682364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.682574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.682603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.682765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.682820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.683028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.683055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.979 [2024-07-26 14:25:28.683219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.979 [2024-07-26 14:25:28.683253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.979 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.683475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.683521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.683729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.683757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.683948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.684004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.684246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.684302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 
00:31:11.980 [2024-07-26 14:25:28.684517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.684546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.684713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.684773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.685005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.685066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.685280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.685308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.685501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.685561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.685710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.685756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.685976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.686004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.686226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.686289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.686448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.686497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.686709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.686738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 
00:31:11.980 [2024-07-26 14:25:28.686999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.687066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.687285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.687320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.687521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.687550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.687752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.687806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.688016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.688072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.688295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.688330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.688531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.688560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.688800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.688865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.689068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.689097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.689284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.689319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 
00:31:11.980 [2024-07-26 14:25:28.689545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.689574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.689788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.689816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.690059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.690118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.690306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.690341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.690551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.690579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.690783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.690839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.691084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.691137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.691328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.691356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.691536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.691565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.980 qpair failed and we were unable to recover it. 00:31:11.980 [2024-07-26 14:25:28.691769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.980 [2024-07-26 14:25:28.691824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 
00:31:11.981 [2024-07-26 14:25:28.692017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.692044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.692268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.692302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.692495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.692525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.692707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.692735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.692956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.693017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.693224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.693258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.693470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.693498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.693694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.693759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.693968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.694024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.694245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.694273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 
00:31:11.981 [2024-07-26 14:25:28.694507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.694537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.694716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.694761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.694946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.694973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.695201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.695258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.695505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.695538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.695684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.695712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.695904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.695960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.696193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.696255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.696494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.696522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.696678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.696706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 
00:31:11.981 [2024-07-26 14:25:28.696924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.696979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.697232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.697259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.697475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.697523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.697677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.697705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.697874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.697902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.698120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.698172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.698364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.698398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.698585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.698614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.698814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.698869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.699113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.699178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 
00:31:11.981 [2024-07-26 14:25:28.699397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.699425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.699605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.699633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.981 [2024-07-26 14:25:28.699833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.981 [2024-07-26 14:25:28.699887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.981 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.700123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.700151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.700344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.700376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.700571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.700599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.700851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.700897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.701089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.701142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.701336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.701369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.701613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.701640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 
00:31:11.982 [2024-07-26 14:25:28.701813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.701871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.702115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.702170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.702389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.702417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.702593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.702621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.702788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.702848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.703048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.703076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.703262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.703296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.703489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.703518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.703726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.703754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.703969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.704026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 
00:31:11.982 [2024-07-26 14:25:28.704207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.704241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.704478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.704506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.704712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.704769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.705004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.705062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.705362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.705401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.705629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.705657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.705892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.705957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.706343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.706397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.706626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.706654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 00:31:11.982 [2024-07-26 14:25:28.706945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.982 [2024-07-26 14:25:28.706979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.982 qpair failed and we were unable to recover it. 
00:31:11.982 [2024-07-26 14:25:28.707339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.982 [2024-07-26 14:25:28.707401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.982 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error on tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt between 14:25:28.707 and 14:25:28.766; only the timestamps differ ...]
00:31:11.989 [2024-07-26 14:25:28.766835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.989 [2024-07-26 14:25:28.766896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.989 qpair failed and we were unable to recover it.
00:31:11.989 [2024-07-26 14:25:28.767115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.767143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.767327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.767360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.767562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.767621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.767874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.767929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.768168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.768196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.768377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.768411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.768634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.768661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.768886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.768945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.769171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.769199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.769388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.769421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 
00:31:11.989 [2024-07-26 14:25:28.769599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.769626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.769855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.769914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.770159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.770186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.770402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.770454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.770733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.770766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.771034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.771088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.771320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.771348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.771552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.771592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.989 qpair failed and we were unable to recover it. 00:31:11.989 [2024-07-26 14:25:28.771848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.989 [2024-07-26 14:25:28.771907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.772197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.772263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 
00:31:11.990 [2024-07-26 14:25:28.772501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.772529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.772794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.772849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.773121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.773179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.773408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.773451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.773658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.773691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.773951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.774004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.774254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.774310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.774585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.774613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.774790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.774818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.775032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.775085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 
00:31:11.990 [2024-07-26 14:25:28.775291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.775324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.775549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.775601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.775789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.775816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.776074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.776107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.776386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.776420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.776680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.776727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.776930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.776958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.777186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.777243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.777529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.777595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.777871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.777931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 
00:31:11.990 [2024-07-26 14:25:28.778154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.778181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.778368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.778402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.778679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.778726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.779018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.779077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.779300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.779328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.779544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.779578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.779793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.779849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.780132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.780198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.780461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.780490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.780790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.780850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 
00:31:11.990 [2024-07-26 14:25:28.781120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.781180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.990 [2024-07-26 14:25:28.781446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.990 [2024-07-26 14:25:28.781481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.990 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.781735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.781777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.781999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.782052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.782335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.782395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.782602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.782630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.782807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.782835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.783027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.783081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.783234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.783288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.783491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.783546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 
00:31:11.991 [2024-07-26 14:25:28.783742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.783770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.783973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.784027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.784264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.784298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.784500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.784557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.784760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.784792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.784948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.785001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.785143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.785176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.785391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.785425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.785667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.785695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.786000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.786054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 
00:31:11.991 [2024-07-26 14:25:28.786394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.786453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.786680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.786725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.786936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.786964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.787188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.787249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.787490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.787547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.787804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.787872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.788109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.788137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.788380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.788413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.788676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.788721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.788914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.788968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 
00:31:11.991 [2024-07-26 14:25:28.789248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.789275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.789553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.789582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.789850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.789902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.790157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.790210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.991 [2024-07-26 14:25:28.790404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.991 [2024-07-26 14:25:28.790439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.991 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.790623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.790650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.790878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.790936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.791211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.791265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.791517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.791546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.791793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.791852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 
00:31:11.992 [2024-07-26 14:25:28.792091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.792146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.792477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.792512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.792730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.792758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.793035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.793095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.793318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.793352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.793586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.793621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.793861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.793888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.794121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.794175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.794424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.794469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.794689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.794737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 
00:31:11.992 [2024-07-26 14:25:28.794968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.794996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.795181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.795236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.795442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.795490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.795675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.795722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.796002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.796034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.796224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.796278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.796513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.796548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.796842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.796899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.797127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.797155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.797452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.797486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 
00:31:11.992 [2024-07-26 14:25:28.797767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.797833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.798098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.798160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.798444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.798472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.798765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.798799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.799032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.799084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.799324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.799380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.799624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.799652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.799882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.799938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.992 qpair failed and we were unable to recover it. 00:31:11.992 [2024-07-26 14:25:28.800226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.992 [2024-07-26 14:25:28.800287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.800543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.800578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 
00:31:11.993 [2024-07-26 14:25:28.800818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.800845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.801005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.801063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.801302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.801336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.801573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.801626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.801865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.801893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.802135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.802189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.802436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.802471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.802731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.802764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.803006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.803034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 00:31:11.993 [2024-07-26 14:25:28.803316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.993 [2024-07-26 14:25:28.803375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:11.993 qpair failed and we were unable to recover it. 
00:31:11.993 [2024-07-26 14:25:28.803637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.993 [2024-07-26 14:25:28.803665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:11.993 qpair failed and we were unable to recover it.
00:31:12.276 [... the identical error triplet — posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from [2024-07-26 14:25:28.803637] through [2024-07-26 14:25:28.862139] (elapsed 00:31:11.993 to 00:31:12.276) ...]
00:31:12.276 [2024-07-26 14:25:28.862329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.862357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.862551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.862585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.862848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.862910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.863166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.863218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.863510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.863539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.863761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.863816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.864096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.864153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.864371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.864404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.864797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.864851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.865138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.865194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 
00:31:12.276 [2024-07-26 14:25:28.865449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.865504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.865776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.865811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.866042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.866070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.866318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.866383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.866658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.866687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.866948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.867018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.867280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.867307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.867509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.867544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.867799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.276 [2024-07-26 14:25:28.867859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.276 qpair failed and we were unable to recover it. 00:31:12.276 [2024-07-26 14:25:28.868109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.868164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 
00:31:12.277 [2024-07-26 14:25:28.868439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.868468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.868622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.868650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.868905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.868971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.869199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.869254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.869453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.869498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.869768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.869803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.870042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.870098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.870379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.870452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.870724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.870769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.871008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.871063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 
00:31:12.277 [2024-07-26 14:25:28.871358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.871415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.871686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.871739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.872000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.872028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.872232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.872288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.872561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.872597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.872842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.872898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.873156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.873184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.873406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.873448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.873609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.873637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.873807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.873864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 
00:31:12.277 [2024-07-26 14:25:28.874057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.874083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.874311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.874367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.874622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.874650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.874925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.874984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.875204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.875231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.875502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.875550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.875792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.875847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.876128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.876185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.876449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.876478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.876711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.876746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 
00:31:12.277 [2024-07-26 14:25:28.877016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.877074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.877311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.877365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.277 qpair failed and we were unable to recover it. 00:31:12.277 [2024-07-26 14:25:28.877623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.277 [2024-07-26 14:25:28.877651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.877872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.877926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.878213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.878271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.878503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.878537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.878757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.878785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.879071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.879129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.879397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.879449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.879716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.879750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 
00:31:12.278 [2024-07-26 14:25:28.879978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.880006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.880227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.880289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.880563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.880598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.880870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.880937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.881160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.881189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.881403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.881445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.881716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.881751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.881984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.882045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.882268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.882296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.882443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.882492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 
00:31:12.278 [2024-07-26 14:25:28.882681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.882728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.882995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.883066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.883301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.883356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.883590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.883619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.883824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.883857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.884152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.884209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.884443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.884472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.884681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.884724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.884964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.278 [2024-07-26 14:25:28.885029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.278 qpair failed and we were unable to recover it. 00:31:12.278 [2024-07-26 14:25:28.885259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.885316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 
00:31:12.279 [2024-07-26 14:25:28.885543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.885572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.885757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.885789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.886024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.886079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.886300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.886334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.886541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.886569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.886762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.886816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.887022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.887077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.887310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.887344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.887532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.887561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.887788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.887847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 
00:31:12.279 [2024-07-26 14:25:28.888134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.888194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.888416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.888460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.888698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.888726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.888925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.888979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.889222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.889287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.889518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.889574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.889822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.889850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.890077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.890131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.890364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.890399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.890688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.890736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 
00:31:12.279 [2024-07-26 14:25:28.891043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.891091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.891456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.891494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.891683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.891726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.891959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.892022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.892291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.892320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.892558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.892593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.892808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.892861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.893101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.893157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.893377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.893404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.893624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.893652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 
00:31:12.279 [2024-07-26 14:25:28.893911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.893971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.894251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.279 [2024-07-26 14:25:28.894318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.279 qpair failed and we were unable to recover it. 00:31:12.279 [2024-07-26 14:25:28.894523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.894551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.894756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.894811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.895032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.895085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.895301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.895336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.895551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.895579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.895808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.895866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.896147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.896202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.896454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.896507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 
00:31:12.280 [2024-07-26 14:25:28.896715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.896743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.896998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.897053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.897330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.897395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.897646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.897674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.897857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.897885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.898121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.898188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.898371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.898406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.898614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.898648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.898841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.898869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 00:31:12.280 [2024-07-26 14:25:28.899029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.280 [2024-07-26 14:25:28.899085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.280 qpair failed and we were unable to recover it. 
00:31:12.280 [2024-07-26 14:25:28.899311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.280 [2024-07-26 14:25:28.899345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.280 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every retry from 14:25:28.899 through 14:25:28.924; duplicate entries elided ...]
00:31:12.283 [2024-07-26 14:25:28.924290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.283 [2024-07-26 14:25:28.924318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.283 qpair failed and we were unable to recover it.
[... identical connect() retry failures from 14:25:28.924 through 14:25:28.926 elided ...]
00:31:12.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2644200 Killed "${NVMF_APP[@]}" "$@"
00:31:12.283 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:12.283 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:12.283 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:12.283 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:12.283 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical connect() retry failures from 14:25:28.927 through 14:25:28.932 elided ...]
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2644765
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2644765
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2644765 ']'
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:12.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:12.284 14:25:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical connect() retry failures from 14:25:28.933 through 14:25:28.935 elided ...]
00:31:12.284 [2024-07-26 14:25:28.935402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.284 [2024-07-26 14:25:28.935441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.284 qpair failed and we were unable to recover it.
[... identical connect() retry failures continue from 14:25:28.935 through 14:25:28.953; duplicate entries elided ...]
00:31:12.286 [2024-07-26 14:25:28.953903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.953931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.954147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.954200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.954447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.954493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.954723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.954758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.954988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.955016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.955245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.955314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.956006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.956060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.956365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.956420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.956644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.956673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.956897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.956959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 
00:31:12.286 [2024-07-26 14:25:28.957184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.957237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.286 [2024-07-26 14:25:28.957423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.286 [2024-07-26 14:25:28.957467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.286 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.957672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.957700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.957932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.957999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.958237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.958293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.958510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.958540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.958750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.958779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.958976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.959034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.959255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.959312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.959507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.959536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 
00:31:12.287 [2024-07-26 14:25:28.959731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.959770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.960014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.960075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.960265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.960300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.960520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.960549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.960746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.960775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.960955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.961017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.961256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.961291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.961518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.961566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.961818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.961865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.962190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.962249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 
00:31:12.287 [2024-07-26 14:25:28.962518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.962546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.962751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.962811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.963075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.963103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.963319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.963353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.963552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.963581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.963813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.963879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.964062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.964090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.964285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.964319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.964515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.964544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.964743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.964803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 
00:31:12.287 [2024-07-26 14:25:28.965094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.965139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.965309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.965343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.965550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.965580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.965749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.965814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.966061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.966090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.966298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.966338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.966561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.966590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.966761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.966818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.967054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.967082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 00:31:12.287 [2024-07-26 14:25:28.967282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.967316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.287 qpair failed and we were unable to recover it. 
00:31:12.287 [2024-07-26 14:25:28.967534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.287 [2024-07-26 14:25:28.967562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.967787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.967847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.968102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.968131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.968356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.968390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.968602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.968630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.968829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.968885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.969091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.969118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.969330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.969367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.969623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.969652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.969859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.969918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 
00:31:12.288 [2024-07-26 14:25:28.970147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.970184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.970394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.970437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.970645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.970673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.970919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.970973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.971166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.971194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.971373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.971411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.971629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.971657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.971833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.971888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.972090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.972118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.972318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.972353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 
00:31:12.288 [2024-07-26 14:25:28.972543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.972571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.972860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.972915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.973117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.973145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.973377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.973411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.973597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.973625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.973841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.973896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.974195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.974239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.974469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.974513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.974691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.974741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.975004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.975071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 
00:31:12.288 [2024-07-26 14:25:28.975316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.975345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.975625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.975653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.975874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.975929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.288 [2024-07-26 14:25:28.976145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.288 [2024-07-26 14:25:28.976201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.288 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.976397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.976425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.976627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.976655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.976927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.976983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.977158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.977214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.977446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.977475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.977630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.977657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 
00:31:12.289 [2024-07-26 14:25:28.977908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.977965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.978201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.978255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.978502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.978538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.978767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.978824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.979055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.979116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.979451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.979499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.979661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.979689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.979954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.980018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.980297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.980354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.980622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.980650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 
00:31:12.289 [2024-07-26 14:25:28.980829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.980857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.981057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.981111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.981271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.981306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.981499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.981527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.981711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.981738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.981924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.981978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.982193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.982245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.982474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.982502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.982709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.982736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.982983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.983040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 
00:31:12.289 [2024-07-26 14:25:28.983266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.983320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.983567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.983595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.983769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.983796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.984018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.984071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.984236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.984269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.984511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.984538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.984726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.984754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.984999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.985063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.985258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.985292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.985518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.985547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 
00:31:12.289 [2024-07-26 14:25:28.985722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.985750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.985926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.289 [2024-07-26 14:25:28.985981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.289 qpair failed and we were unable to recover it. 00:31:12.289 [2024-07-26 14:25:28.986248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.986282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.986452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.986500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.986691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.986719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.986967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.987025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.987272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.987306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.987502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.987553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.987730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.987758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.987950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.988005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 
00:31:12.290 [2024-07-26 14:25:28.988191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.988226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.988451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.988501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.988711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.988744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.989024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.989085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.989337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.989371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.989572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.989601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.989890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.989955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.990230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.990287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.990512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.990540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.990774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.990846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 
00:31:12.290 [2024-07-26 14:25:28.991074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.991102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.991316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.991351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.991582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.991611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.991814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.991870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.992058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.992085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.992270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.992304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.992576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.992605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.992796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.992852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.993096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.993123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.993289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.993324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 
00:31:12.290 [2024-07-26 14:25:28.993552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.993581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.993770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.993827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.993998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.994027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.994243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.994277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.994545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.994574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.994728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.994773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.994974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.995002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.995217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.995251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.995560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.995590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.290 [2024-07-26 14:25:28.995822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.995895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 
00:31:12.290 [2024-07-26 14:25:28.996157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.290 [2024-07-26 14:25:28.996185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.290 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.996391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.996425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.996668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.996712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.996920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.996972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.997185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.997213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.997435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.997470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.997722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.997755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.997998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.998058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.998265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.998293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.998556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.998584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 
00:31:12.291 [2024-07-26 14:25:28.998829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.998892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.999135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.999191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.999460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.999510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:28.999732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:28.999759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.000008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.000066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.000354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.000418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.000728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.000774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.001037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.001091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.001316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.001371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.001615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.001644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 
00:31:12.291 [2024-07-26 14:25:29.001854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.001883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.002103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.002156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.002425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.002469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.002672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.002717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.002937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.002966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.003177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.003228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.003442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.003425] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization...
00:31:12.291 [2024-07-26 14:25:29.003491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.003547] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:12.291 [2024-07-26 14:25:29.003741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.003775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
00:31:12.291 [2024-07-26 14:25:29.003989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.291 [2024-07-26 14:25:29.004015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.291 qpair failed and we were unable to recover it.
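The errno = 111 above is ECONNREFUSED on Linux: each connect() to 10.0.0.2 port 4420 (the default NVMe/TCP port) is refused because, as the interleaved "Starting SPDK v24.09-pre ... initialization..." and "[ DPDK EAL parameters: nvmf -c 0xF0 ... ]" records show, the nvmf target process is only now starting (-c 0xF0 is the DPDK core mask, i.e. cores 4-7), so nothing is listening on that port yet. A minimal standalone sketch, not SPDK code, that reproduces the same errno against a reachable host with no listener on the port (address and port copied from the log purely for illustration):

    /* Minimal sketch (not SPDK code): reproduce errno 111 (ECONNREFUSED),
     * the error posix_sock_create reports above. Run against a reachable
     * host with nothing listening on the target port. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);          /* default NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, the peer answers the SYN with
             * RST and connect() fails immediately with ECONNREFUSED. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

On Linux this prints "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create records; an unreachable host would instead produce ETIMEDOUT or EHOSTUNREACH.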
00:31:12.291 [2024-07-26 14:25:29.004216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.004271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.004502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.004530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.004714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.004761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.005040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.005068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.005314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.005348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.005591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.005620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.005794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.005848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.006070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.006098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.006307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.006342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.006539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.006568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 
00:31:12.291 [2024-07-26 14:25:29.006752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.006805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.291 qpair failed and we were unable to recover it. 00:31:12.291 [2024-07-26 14:25:29.007036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.291 [2024-07-26 14:25:29.007064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.007251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.007285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.007493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.007521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.007678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.007705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.007882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.007910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.008140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.008195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.008356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.008391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.008619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.008648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.008794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.008822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 
00:31:12.292 [2024-07-26 14:25:29.009019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.009052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.009269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.009304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.009575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.009637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.009849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.009885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.010112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.010175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.010398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.010442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.010650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.010678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.010858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.010886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.011144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.011205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.011435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.011471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 
00:31:12.292 [2024-07-26 14:25:29.011781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.011847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.012105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.012133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.012309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.012344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.012531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.012560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.012755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.012809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.013076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.013109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.013313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.013348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.013540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.013569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.013752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.013806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.014022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.014051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 
00:31:12.292 [2024-07-26 14:25:29.014259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.014316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.014538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.014566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.014805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.014862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.015119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.015147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.015375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.015409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.015705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.015740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.292 qpair failed and we were unable to recover it. 00:31:12.292 [2024-07-26 14:25:29.016013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.292 [2024-07-26 14:25:29.016071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.016311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.016339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.016631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.016659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.016893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.016947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 
00:31:12.293 [2024-07-26 14:25:29.017212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.017274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.017554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.017583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.017827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.017892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.018127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.018183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.018469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.018498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.018748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.018794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.019060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.019120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.019393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.019436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.019677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.019722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.019944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.019972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 
00:31:12.293 [2024-07-26 14:25:29.020197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.020250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.020506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.020534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.020729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.020791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.021028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.021055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.021327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.021381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.021583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.021612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.021834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.021888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.022107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.022134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.022333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.022368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.022577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.022605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 
00:31:12.293 [2024-07-26 14:25:29.022781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.022836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.023058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.023085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.023301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.023335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.023563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.023592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.023789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.023845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.024044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.024076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.024268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.024301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.024540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.024568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.024791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.024861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.025054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.025082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 
00:31:12.293 [2024-07-26 14:25:29.025267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.025300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.025497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.025525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.025743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.025798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.026026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.026053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.026219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.026253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.026436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.026486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.026649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.026677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.293 qpair failed and we were unable to recover it. 00:31:12.293 [2024-07-26 14:25:29.026883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.293 [2024-07-26 14:25:29.026910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.027078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.027132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.027331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.027365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 
00:31:12.294 [2024-07-26 14:25:29.027565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.027593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.027797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.027826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.028054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.028112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.028353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.028387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.028730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.028791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.029018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.029046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.029317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.029370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.029620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.029648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.029821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.029876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.030097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.030124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 
00:31:12.294 [2024-07-26 14:25:29.030392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.030435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.030657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.030685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.030905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.030975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.031229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.031257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.031459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.031514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.031725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.031773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.032078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.032133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.032358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.032386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.032610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.032638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 00:31:12.294 [2024-07-26 14:25:29.032866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.294 [2024-07-26 14:25:29.032921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.294 qpair failed and we were unable to recover it. 
00:31:12.294 [2024-07-26 14:25:29.033203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.294 [2024-07-26 14:25:29.033262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420
00:31:12.294 qpair failed and we were unable to recover it.
00:31:12.294 [the three-line connect() failed / sock connection error / qpair failed pattern repeats for tqpair=0x7f227c000b90 from 14:25:29.033 to 14:25:29.047, then identically for tqpair=0x7f226c000b90 from 14:25:29.047 to 14:25:29.060]
00:31:12.296 EAL: No free 2048 kB hugepages reported on node 1
00:31:12.297 [2024-07-26 14:25:29.060197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.297 [2024-07-26 14:25:29.060261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:12.297 qpair failed and we were unable to recover it.
00:31:12.297 [the same three-line pattern repeats for tqpair=0x7f226c000b90 from 14:25:29.060 to 14:25:29.092]
00:31:12.299 [2024-07-26 14:25:29.092169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.299 [2024-07-26 14:25:29.092197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420
00:31:12.299 qpair failed and we were unable to recover it.
00:31:12.299 [2024-07-26 14:25:29.092490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.092519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.092681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.092709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.092848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.092875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.093042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.093070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.093234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.093262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.093439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.093477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.093653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.093682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.093821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.093860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.094042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.094069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.094255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.094285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 
00:31:12.299 [2024-07-26 14:25:29.094482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.094511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.094658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.094686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.094846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.094874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.095109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.095156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.095396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.095490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.095625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.095653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.095840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.095869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.096058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.299 [2024-07-26 14:25:29.096086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.299 qpair failed and we were unable to recover it. 00:31:12.299 [2024-07-26 14:25:29.096301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.096330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.096485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.096514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 
00:31:12.300 [2024-07-26 14:25:29.096655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.096690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.096871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.096899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.097056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.097085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.097263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.097291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.097448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.097478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.097642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.097670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.097854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.097884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.098064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.098093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.098299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.098328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.098522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.098551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 
00:31:12.300 [2024-07-26 14:25:29.098746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.098776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.098979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.099007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.099202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.099231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.099405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.099442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.099616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.099645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.099817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.099849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.100042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.100070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.100266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.100294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.100506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.100535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.100673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.100702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 
00:31:12.300 [2024-07-26 14:25:29.100876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.100905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.101085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.101112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.101304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.101331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.101505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.101533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.101697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.101726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.101937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.101966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.102142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.102172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.102388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.102416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.102597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.102625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.102822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.102850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 
00:31:12.300 [2024-07-26 14:25:29.103037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.103071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.103296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.103360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.104650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.104682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.104869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.300 [2024-07-26 14:25:29.104881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.104910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.105142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.105209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.105462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.105509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.105684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.105737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.106033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.106063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 00:31:12.300 [2024-07-26 14:25:29.106291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.300 [2024-07-26 14:25:29.106355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.300 qpair failed and we were unable to recover it. 
00:31:12.301 [2024-07-26 14:25:29.106623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.106653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.106828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.106892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.107180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.107208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.107384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.107494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.107671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.107710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.107891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.107919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.108146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.108174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.108342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.108371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.108557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.108586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 00:31:12.301 [2024-07-26 14:25:29.108779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.301 [2024-07-26 14:25:29.108807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f226c000b90 with addr=10.0.0.2, port=4420 00:31:12.301 qpair failed and we were unable to recover it. 
00:31:12.301 [the record keeps repeating for tqpair=0x7f226c000b90 through 14:25:29.110507; from 14:25:29.110719 the failing qpair changes]
00:31:12.301 [2024-07-26 14:25:29.110719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.301 [2024-07-26 14:25:29.110829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420
00:31:12.301 qpair failed and we were unable to recover it.
00:31:12.303 [the same three-line record for tqpair=0x7f2274000b90 repeats verbatim, timestamps advancing from 14:25:29.111134 through 14:25:29.140533]
00:31:12.303 [2024-07-26 14:25:29.140739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.303 [2024-07-26 14:25:29.140768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.303 qpair failed and we were unable to recover it. 00:31:12.303 [2024-07-26 14:25:29.140998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.303 [2024-07-26 14:25:29.141034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.303 qpair failed and we were unable to recover it. 00:31:12.303 [2024-07-26 14:25:29.141252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.303 [2024-07-26 14:25:29.141286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.303 qpair failed and we were unable to recover it. 00:31:12.303 [2024-07-26 14:25:29.141509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.303 [2024-07-26 14:25:29.141538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.303 qpair failed and we were unable to recover it. 00:31:12.303 [2024-07-26 14:25:29.141689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.303 [2024-07-26 14:25:29.141717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.303 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.141919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.141954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.142180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.142243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.142461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.142511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.142719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.142748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.142929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.142957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 
00:31:12.580 [2024-07-26 14:25:29.143157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.143189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.143376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.143409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.143617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.580 [2024-07-26 14:25:29.143645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.580 qpair failed and we were unable to recover it. 00:31:12.580 [2024-07-26 14:25:29.143865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.143899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.144062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.144096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.145016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.145079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.145298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.145328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.145519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.145550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.145759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.145793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.145961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.145995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 
00:31:12.581 [2024-07-26 14:25:29.146210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.146239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.147485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.147518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.147719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.147797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.148055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.148120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.148375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.148404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.148590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.148620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.148816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.148892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.149152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.149216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.149474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.149504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.149658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.149706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 
00:31:12.581 [2024-07-26 14:25:29.149920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.149984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.150249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.150311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.150553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.150582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.150729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.150764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.150941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.151004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.151260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.151323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.151589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.151618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.151839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.151873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.152085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.152149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.152411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.152493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 
00:31:12.581 [2024-07-26 14:25:29.152668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.152701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.152892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.152927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.153101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.153165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.153462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.153517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.153732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.153761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.154007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.154042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.581 qpair failed and we were unable to recover it. 00:31:12.581 [2024-07-26 14:25:29.154273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.581 [2024-07-26 14:25:29.154336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.154629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.154658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.154840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.154868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.155087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.155122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 
00:31:12.582 [2024-07-26 14:25:29.155402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.155493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.155637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.155666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.155902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.155930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.157089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.157163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.157489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.157520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.157662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.157718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.158001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.158029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.158267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.158303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.158527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.158556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.158731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.158795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 
00:31:12.582 [2024-07-26 14:25:29.159058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.159086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.159258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.159292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.159496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.159525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.159700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.159734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.159948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.159976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.160238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.160273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.160472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.160551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.160859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.160923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.161203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.161231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.161434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.161465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 
00:31:12.582 [2024-07-26 14:25:29.161614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.161642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.161806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.161834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.162009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.162037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.162189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.162218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.162397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.162425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.162614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.162643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.162862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.162891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.163100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.163128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.163311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.163339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.582 qpair failed and we were unable to recover it. 00:31:12.582 [2024-07-26 14:25:29.163532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.582 [2024-07-26 14:25:29.163562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 
00:31:12.583 [2024-07-26 14:25:29.163716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.163745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.163912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.163940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.164147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.164176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.164360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.164389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.164589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.164619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.164854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.164890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.165088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.165123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.165364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.165399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.165609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.165638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.165829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.165864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 
00:31:12.583 [2024-07-26 14:25:29.166055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.166090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.166318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.166383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2274000b90 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.166641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.166685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.166856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.166886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.167054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.167103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.167277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.167330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.167524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.167555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.167728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.167756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.167952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.167981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.168187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.168240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 
00:31:12.583 [2024-07-26 14:25:29.168397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.168425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.169251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.169284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.169516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.169546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.169688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.169733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.169913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.169966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.170179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.170229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.170388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.170417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.170613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.170641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.170851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.170902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.171092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.171140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 
00:31:12.583 [2024-07-26 14:25:29.171313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.171341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.171542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.171590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.171784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.583 [2024-07-26 14:25:29.171831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.583 qpair failed and we were unable to recover it. 00:31:12.583 [2024-07-26 14:25:29.172069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.172135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.172342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.172370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.172548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.172598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.172809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.172858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.173061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.173089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.173296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.173324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.173534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.173584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 
00:31:12.584 [2024-07-26 14:25:29.173743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.173799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.174037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.174088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.174261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.174290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.174453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.174482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.174655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.174702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.174883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.174930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.175149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.175200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.175386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.175415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.175616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.175663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 00:31:12.584 [2024-07-26 14:25:29.175867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.584 [2024-07-26 14:25:29.175913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.584 qpair failed and we were unable to recover it. 
00:31:12.584 [2024-07-26 14:25:29.176111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.584 [2024-07-26 14:25:29.176162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.584 qpair failed and we were unable to recover it.
[... the same three-line error group repeats without interruption through 00:31:12.590 (controller timestamps 2024-07-26 14:25:29.176321 to 14:25:29.237125), always for tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420: every connect() attempt fails with errno = 111 and no qpair is recovered ...]
00:31:12.590 [2024-07-26 14:25:29.237330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.590 [2024-07-26 14:25:29.237358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.590 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.237553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.237601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.237811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.237857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.238045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.238096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.238304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.238332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.238519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.238570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.238748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.238783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.239017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.239068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.239243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.239271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.239480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.239510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 
00:31:12.591 [2024-07-26 14:25:29.239699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.239765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.239985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.240034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.240234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.240285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.240494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.240528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.240736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.240792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.241202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.241232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.241442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.241472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.241659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.241707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.241925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.241975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.242183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.242232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 
00:31:12.591 [2024-07-26 14:25:29.242420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.242454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.242668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.242696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.242885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.242937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.243170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.243229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.243440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.243473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.243689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.243760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.243987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.244036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.244243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.244294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.244509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.244538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 00:31:12.591 [2024-07-26 14:25:29.244733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.591 [2024-07-26 14:25:29.244785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.591 qpair failed and we were unable to recover it. 
00:31:12.592 [2024-07-26 14:25:29.244985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.245036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.245257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.245305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.245491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.245540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.245744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.245794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.246010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.246059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.246245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.246274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.246502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.246531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.246813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.246877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.247086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.247134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.247311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.247340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 
00:31:12.592 [2024-07-26 14:25:29.247514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.247562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.247811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.247870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.248106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.248156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.248346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.248374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.248619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.248670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.248893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.248942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.249209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.249267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.249456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.249484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.249681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.249727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.249926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.249975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 
00:31:12.592 [2024-07-26 14:25:29.250189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.250240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.250421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.250459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.250718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.250746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.251005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.251058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.251267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.251315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.251542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.251571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.251721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.251766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.251990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.252047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.252276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.252326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.252519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.252568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 
00:31:12.592 [2024-07-26 14:25:29.252791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.252843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.253056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.253106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.253325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.253352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.592 [2024-07-26 14:25:29.253612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.592 [2024-07-26 14:25:29.253659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.592 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.253846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.253892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.254118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.254166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.254353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.254381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.254593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.254643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.254853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.254899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.255152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.255200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 
00:31:12.593 [2024-07-26 14:25:29.255387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.255416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.255620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.255666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.255900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.255934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.256118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.256168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.256374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.256403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.256617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.256663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.256870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.256915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.257104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.257154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.257339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.257371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.257572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.257619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 
00:31:12.593 [2024-07-26 14:25:29.257833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.257878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.258105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.258169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.258399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.258432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.258638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.258685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.258908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.258955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.259181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.259232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.259423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.259457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.259649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.259698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.259879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.259925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.260123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.260175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 
00:31:12.593 [2024-07-26 14:25:29.260436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.260464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.260679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.260729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.260959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.261005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.261200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.261251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.261455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.261484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.261693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.261720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.261907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.261953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.593 [2024-07-26 14:25:29.262126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.593 [2024-07-26 14:25:29.262176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.593 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.262355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.262383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.262582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.262611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 
00:31:12.594 [2024-07-26 14:25:29.262791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.262838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.263017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.263067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.263272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.263320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.263538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.263585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.263771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.263817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.264082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.264140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.264325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.264354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.264590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.264637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.264799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.264845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 00:31:12.594 [2024-07-26 14:25:29.265072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.594 [2024-07-26 14:25:29.265125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.594 qpair failed and we were unable to recover it. 
00:31:12.594 [2024-07-26 14:25:29.265272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.265299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.265518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.265566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.265799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.265847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.266081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.266129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.266317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.266345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.266516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.266563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.266762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.266809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.267022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.267069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.267245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.267272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.267489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.267540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.267703] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:12.594 [2024-07-26 14:25:29.267727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.267765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:12.594 [2024-07-26 14:25:29.267772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 [2024-07-26 14:25:29.267782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.267796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:12.594 [2024-07-26 14:25:29.267808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:12.594 [2024-07-26 14:25:29.267953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.268007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.267972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:31:12.594 [2024-07-26 14:25:29.268053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:31:12.594 [2024-07-26 14:25:29.268107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:31:12.594 [2024-07-26 14:25:29.268110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:31:12.594 [2024-07-26 14:25:29.268210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.268237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.268396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.268424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.268662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.268712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.268970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.269022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.269232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.269277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
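The *NOTICE* records interleaved above are not part of the failure: they are the nvmf target application finishing startup on the same console. app_setup_trace reports that tracepoint group mask 0xFFFF was requested and that a snapshot can be captured at runtime with 'spdk_trace -s nvmf -i 0' (or by copying /dev/shm/nvmf_trace.0 for offline analysis), and reactor_run reports one reactor thread started on each of cores 4, 5, 6, and 7, consistent with a four-core reactor mask for this app instance (the exact mask is not shown in this excerpt). The out-of-order timestamps are expected here: the initiator's connect errors and the target's startup notices are written concurrently to one console, so once the target's listener comes up the refused connections should stop.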
00:31:12.594 [2024-07-26 14:25:29.269534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.594 [2024-07-26 14:25:29.269584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.594 qpair failed and we were unable to recover it.
00:31:12.594 [2024-07-26 14:25:29.269805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.269856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.270050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.270099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.270322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.270350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.270495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.270546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.270775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.270822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.270985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.271031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.271244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.271290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.271484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.271518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.271765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.271811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.272042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.272088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.272273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.272301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.272453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.272481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.272650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.272697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.272862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.272907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.273093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.273139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.273328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.273356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.273508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.273555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.273783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.273833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.274024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.274069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.274250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.274278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.274479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.274526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.274709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.274755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.274980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.275028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.275237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.275265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.275420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.275453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.275646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.275679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.275886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.275931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.276120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.276166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.276351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.276378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.276565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.276611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.276803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.276849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.277029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.277074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.277246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.277274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.595 [2024-07-26 14:25:29.277481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.595 [2024-07-26 14:25:29.277527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.595 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.277750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.277796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.277964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.278010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.278244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.278291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.278505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.278551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.278719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.278764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.278986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.279032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.279226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.279253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.279460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.279489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.279683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.279729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.279925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.279970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.280198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.280245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.280456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.280484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.280692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.280738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.280993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.281043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.281226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.281273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.281494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.281522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.281702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.281748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.282022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.282072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.282259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.282287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.282509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.282557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.282748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.282794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.283022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.283068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.283290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.283318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.283540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.283587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.283807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.283853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.284067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.284113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.284324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.284352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.284527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.284574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.284758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.596 [2024-07-26 14:25:29.284804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.596 qpair failed and we were unable to recover it.
00:31:12.596 [2024-07-26 14:25:29.285020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.285065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.285240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.285268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.285483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.285511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.285716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.285761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.285982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.286028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.286195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.286240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.286414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.286451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.286613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.286641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.286855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.286901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.287103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.287149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.287398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.287425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.287602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.287629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.287845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.287890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.288107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.288154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.288376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.288404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.288625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.288653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.288848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.288894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.289109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.289156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.289339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.289367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.289573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.289601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.289833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.289883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.290097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.290143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.290324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.290352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.290568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.290597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.290800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.290845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.291058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.291103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.291285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.291313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.291489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.291562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.291758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.291804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.292018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.292063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.292243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.292271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.292456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.292487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.292666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.292712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.292901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.597 [2024-07-26 14:25:29.292951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.597 qpair failed and we were unable to recover it.
00:31:12.597 [2024-07-26 14:25:29.293147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.293193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.293413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.293457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.293649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.293677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.293840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.293887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.294105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.294150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.294336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.294364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.294574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.294603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.294765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.294811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.295000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.295046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.295233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.295279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.295491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.295536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.295755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.295801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.295972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.296019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.296235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.296263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.296461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.296489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.296700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.296745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.296960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.297005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.297195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.297241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.297449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.297477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.297694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.297743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.297952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.297997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.298204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.298250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.298434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.298462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.298666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.298694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.298889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.298936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.299151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.299196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.299403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.299449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.299641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.299669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.299827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.299874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.300085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.300130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.300288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.300316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.300510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.300539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.300719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.300765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.300944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.598 [2024-07-26 14:25:29.300989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.598 qpair failed and we were unable to recover it.
00:31:12.598 [2024-07-26 14:25:29.301177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.301223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.301447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.301475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.301704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.301749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.301937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.301982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.302191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.302237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.302419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.302453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.302649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.302677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.302869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.302915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.303104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.303150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.303333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.303361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.303533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.303562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.303758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.303804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.304027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.304077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.304289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.304335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.304551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.304597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.304817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.304864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.305057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.305102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.305308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.305336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.305544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.305589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.305811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.305860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.306089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.306135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.306290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.306318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.306530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.306577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.306772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.306816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.307033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.307078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.307287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.307314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.307523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.307569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.307785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.307831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.308043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.308089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.308257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.308284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.308449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.308496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.308720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.308769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.308992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.599 [2024-07-26 14:25:29.309038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.599 qpair failed and we were unable to recover it.
00:31:12.599 [2024-07-26 14:25:29.309244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.309277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.309484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.309531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.309749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.309798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.310003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.310049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.310227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.310254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.310439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.310467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.310617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.310663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.310844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.310890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.311104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.311149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.311328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.311356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.311533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.311562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.311782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.311830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.312010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.312056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.312211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.312256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.312478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.312506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.312727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.312775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.312960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.313006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.313194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.313240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.313454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.313482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.313678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.313723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.313926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.313971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.314166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.314212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.314391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.314419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.314633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.314661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.314846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.314891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.315058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.315105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.315297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.315325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.315506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.315559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.315801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.315834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.316005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.316050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.316230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.316258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.316397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.316425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.316646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.316694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.316885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.600 [2024-07-26 14:25:29.316931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.600 qpair failed and we were unable to recover it.
00:31:12.600 [2024-07-26 14:25:29.317148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.317196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.317402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.317437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.317627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.317655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.317834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.317880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.318063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.318109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.318281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.318308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.318513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.318560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.318769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.318814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.318997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.319042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.319231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.319277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.319476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.601 [2024-07-26 14:25:29.319522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420
00:31:12.601 qpair failed and we were unable to recover it.
00:31:12.601 [2024-07-26 14:25:29.319744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.319794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.319958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.320003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.320225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.320253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.320433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.320461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.320680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.320729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.320916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.320962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.321148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.321194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.321399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.321433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.321651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.321700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.321922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.321975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 
00:31:12.601 [2024-07-26 14:25:29.322186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.322232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.322443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.322471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.322687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.322731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.322942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.322988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.323205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.323251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.323472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.323505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.323721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.323749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.323935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.323981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.601 [2024-07-26 14:25:29.324196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.601 [2024-07-26 14:25:29.324241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.601 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.324415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.324457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 
00:31:12.602 [2024-07-26 14:25:29.324664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.324692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.324879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.324925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.325145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.325195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.325380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.325408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.325630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.325658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.325889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.325936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.326097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.326142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.326356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.326384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.326621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.326649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.326812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.326858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 
00:31:12.602 [2024-07-26 14:25:29.327097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.327144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.327359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.327386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.327585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.327614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.327800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.327845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.328012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.328058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.328251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.328297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.328510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.328558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.328755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.328802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.329015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.329061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.329270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.329298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 
00:31:12.602 [2024-07-26 14:25:29.329453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.329481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.329695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.329738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.329952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.329999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.330149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.330194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.330377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.330405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.330614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.330661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.330839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.330884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.331105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.331151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.331372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.331400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.331597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.331643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 
00:31:12.602 [2024-07-26 14:25:29.331856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.331905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.602 qpair failed and we were unable to recover it. 00:31:12.602 [2024-07-26 14:25:29.332115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.602 [2024-07-26 14:25:29.332161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.332373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.332401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.332601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.332648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.332829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.332876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.333089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.333134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.333341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.333369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.333557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.333586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.333796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.333841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.334033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.334079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 
00:31:12.603 [2024-07-26 14:25:29.334293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.334321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.334511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.334557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.334736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.334783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.334979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.335025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.335237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.335283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.335498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.335545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.335711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.335756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.335931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.335978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.336200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.336250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.336435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.336463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 
00:31:12.603 [2024-07-26 14:25:29.336647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.336694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.336882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.336927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.337139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.337185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.337343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.337370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.337587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.337615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.337756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.337801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.338012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.338058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.338243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.338278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.338464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.338509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.338679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.338725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 
00:31:12.603 [2024-07-26 14:25:29.338918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.338964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.339118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.339163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.339349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.339377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.339578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.339624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.339854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.603 [2024-07-26 14:25:29.339901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.603 qpair failed and we were unable to recover it. 00:31:12.603 [2024-07-26 14:25:29.340126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.340172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.340357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.340385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.340563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.340610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.340825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.340870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.341090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.341142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 
00:31:12.604 [2024-07-26 14:25:29.341322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.341350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.341541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.341589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.341787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.341833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.342026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.342072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.342277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.342305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.342487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.342536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.342745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.342791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.342978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.343025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.343183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.343228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.343441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.343469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 
00:31:12.604 [2024-07-26 14:25:29.343668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.343713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.343966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.344016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.344237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.344283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.344497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.344525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.344708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.344759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.344970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.345021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.345206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.345250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.345459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.345487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.345668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.345714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.345916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.345962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 
00:31:12.604 [2024-07-26 14:25:29.346155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.346201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.346400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.346445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.346619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.346647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.346834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.346880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.347064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.347110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.347320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.347347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.347529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.347556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.347743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.347789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.348012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.604 [2024-07-26 14:25:29.348064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.604 qpair failed and we were unable to recover it. 00:31:12.604 [2024-07-26 14:25:29.348291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.348318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 
00:31:12.605 [2024-07-26 14:25:29.348507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.348554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.348737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.348783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.348962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.349008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.349200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.349246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.349454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.349483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.349712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.349760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.350019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.350064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.350279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.350325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.350501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.350529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 00:31:12.605 [2024-07-26 14:25:29.350719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.350769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it. 
00:31:12.605 [2024-07-26 14:25:29.350998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.605 [2024-07-26 14:25:29.351046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.605 qpair failed and we were unable to recover it.
[... the same pair of errors (connect() failed, errno = 111; sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420) and the same "qpair failed and we were unable to recover it." line repeat continuously, with only the timestamps advancing, until 14:25:29.401594; verbatim repeats elided ...]
00:31:12.611 [2024-07-26 14:25:29.401594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.401625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it.
00:31:12.611 [2024-07-26 14:25:29.401825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.401875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.402056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.402105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.402336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.402367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.402621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.402670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.402871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.402922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.403162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.403211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.403418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.403461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.403676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.403707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.403981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.404031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.611 qpair failed and we were unable to recover it. 00:31:12.611 [2024-07-26 14:25:29.404288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.611 [2024-07-26 14:25:29.404337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 
00:31:12.612 [2024-07-26 14:25:29.404535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.404567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.404772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.404821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.405029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.405079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.405318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.405349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.405580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.405612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.405801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.405849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.406078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.406127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.406353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.406384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.406598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.406647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.406853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.406902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 
00:31:12.612 [2024-07-26 14:25:29.407078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.407127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.407334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.407365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.407609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.407659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.407910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.407959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.408180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.408228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.408478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.408516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.408767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.408816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.409021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.409070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.409298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.409347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.409580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.409612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 
00:31:12.612 [2024-07-26 14:25:29.409835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.409885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.410149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.410200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.410370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.410401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.410663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.410712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.410954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.411003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.411218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.411267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.411488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.411526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.411779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.411828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.612 [2024-07-26 14:25:29.412064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.612 [2024-07-26 14:25:29.412113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.612 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.412327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.412359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 
00:31:12.613 [2024-07-26 14:25:29.412622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.412671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.412878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.412926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.413164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.413213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.413393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.413424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.413680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.413731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.413989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.414043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.414286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.414335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.414544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.414576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.414817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.414867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.415082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.415135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 
00:31:12.613 [2024-07-26 14:25:29.415344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.415375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.415589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.415639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.415847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.415897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.416154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.416205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.416441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.416472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.416680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.416711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.416957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.417005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.417219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.417267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.417495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.417527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.417737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.417786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 
00:31:12.613 [2024-07-26 14:25:29.418008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.418056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.418291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.418322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.418552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.418601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.418862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.418912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.419146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.419196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:12.613 [2024-07-26 14:25:29.419412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.419450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:31:12.613 [2024-07-26 14:25:29.419660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.419692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:12.613 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:12.613 [2024-07-26 14:25:29.419930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.419979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:12.613 [2024-07-26 14:25:29.420187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.420236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.420415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.420464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.613 qpair failed and we were unable to recover it. 00:31:12.613 [2024-07-26 14:25:29.420700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.613 [2024-07-26 14:25:29.420731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.420946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.420995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.421245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.421294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.421534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.421566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.421802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.421856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.422063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.422112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.422331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.422362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.422612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.422663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 
00:31:12.614 [2024-07-26 14:25:29.422901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.422950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.423146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.423196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.423397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.423436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.423650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.423681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.423894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.423943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.424138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.424187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.424455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.424487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.424681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.424712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.424963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.425014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.425221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.425272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 
00:31:12.614 [2024-07-26 14:25:29.425513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.425545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.425760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.425809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.426045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.426094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.426303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.426334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.426546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.426596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.426827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.426877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.427092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.427142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.427380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.427411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.427601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.427650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.427836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.427886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 
00:31:12.614 [2024-07-26 14:25:29.428088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.428137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.428367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.428398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.428589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.428639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.428822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.428876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.429087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.429136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.614 [2024-07-26 14:25:29.429362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.614 [2024-07-26 14:25:29.429394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.614 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.429613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.429662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.429870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.429919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.430134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.430183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.430406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.430446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 
00:31:12.615 [2024-07-26 14:25:29.430691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.430741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.430954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.431003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.431245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.431293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.431512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.431544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.431711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.431760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.431939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.431988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.432172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.432223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.432425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.432471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.432645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.432700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.432898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.432930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 
00:31:12.615 [2024-07-26 14:25:29.433126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.433157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.433366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.433397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.433585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.433636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.433842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.433891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.434108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.434157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.434338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.434369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.434572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.434623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.434867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.434915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.435099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.435149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.435357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.435388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 
00:31:12.615 [2024-07-26 14:25:29.435570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.435626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.435879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.435928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.436155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.436204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.436419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.436458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.436645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.436695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.436897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.436945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.437154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.437202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.437401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.437441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.437630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.437689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 00:31:12.615 [2024-07-26 14:25:29.437869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.615 [2024-07-26 14:25:29.437919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.615 qpair failed and we were unable to recover it. 
00:31:12.616 [2024-07-26 14:25:29.438148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.438198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.438445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.438488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.438673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.438727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.438933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.438982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.439201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.439250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.439458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.439490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.439701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.439732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.439979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.440027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.440268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.440318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.440543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.440575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 
00:31:12.616 [2024-07-26 14:25:29.440769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.440819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.441020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.441069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.441238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.441269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.441502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.441552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.441772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.441821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.442030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.442080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.442243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.442274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.442498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.442530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.442743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.442774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 
00:31:12.616 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.616 [2024-07-26 14:25:29.442972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.443022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:12.616 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.616 [2024-07-26 14:25:29.443235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.443267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:12.616 [2024-07-26 14:25:29.443502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.443550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.443701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.443731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.443903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.443935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.444116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.444145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.444362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.444395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.444589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.444618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it.
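The repeating two-line pattern above -- posix_sock_create reporting connect() failed, errno = 111, immediately followed by nvme_tcp_qpair_connect_sock giving up on the qpair -- is the host-side NVMe/TCP connect path being refused: on Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting on 10.0.0.2 port 4420 at that instant, which is the condition this target-disconnect test case provokes while the rpc_cmd traces rebuild the target. A hypothetical shell probe (not part of the harness) that checks the same condition, using bash's built-in /dev/tcp redirection:

    # Hypothetical probe, not from the test scripts: exit status tells whether
    # anything currently accepts TCP connections on the address/port from the log.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener up on 10.0.0.2:4420"
    else
        echo "refused/unreachable -- matches the errno 111 (ECONNREFUSED) loop above"
    fi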
00:31:12.616 [2024-07-26 14:25:29.444803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.444832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.445007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.445036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f227c000b90 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.445268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.616 [2024-07-26 14:25:29.445317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.616 qpair failed and we were unable to recover it. 00:31:12.616 [2024-07-26 14:25:29.445523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.445555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.445761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.445792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.446016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.446064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.446264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.446312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.446558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.446590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.446804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.446853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.447065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.447112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 
00:31:12.617 [2024-07-26 14:25:29.447342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.447373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.447579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.447610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.447847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.447896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.448098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.448147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.448320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.448352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.448546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.448597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.617 [2024-07-26 14:25:29.448866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.617 [2024-07-26 14:25:29.448921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.617 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.449166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.449215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.449441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.449473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.449641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.449672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 
00:31:12.880 [2024-07-26 14:25:29.449893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.449942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.450182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.450232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.450495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.450527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.450778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.450827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.451041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.451089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.451322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.451371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.451588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.451619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.451859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.451909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.452144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.452193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.452374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.452410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 
00:31:12.880 [2024-07-26 14:25:29.452615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.452647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.452862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.452911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.453154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.453203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.453400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.453454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.453658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.453695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.453932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.880 [2024-07-26 14:25:29.453981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.880 qpair failed and we were unable to recover it. 00:31:12.880 [2024-07-26 14:25:29.454195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.454244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.454444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.454483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.454648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.454680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.454893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.454942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 
00:31:12.881 [2024-07-26 14:25:29.455155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.455204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.455416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.455454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.455622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.455652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.455898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.455947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.456165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.456213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.456422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.456464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.456650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.456688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.456908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.456956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.457198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.457246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.457493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.457525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 
00:31:12.881 [2024-07-26 14:25:29.457763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.457793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.457996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.458044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.458222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.458271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.458521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.458553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.458772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.458821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.459042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.459090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.459363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.459399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.459645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.459703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.459886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.459935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.460144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.460193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 
00:31:12.881 [2024-07-26 14:25:29.460424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.460471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.460705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.460736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.460986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.461035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.461297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.461349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.461580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.461611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.461851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.461897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.462137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.462186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.462394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.462425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.462659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.462693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 00:31:12.881 [2024-07-26 14:25:29.462899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.881 [2024-07-26 14:25:29.462948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.881 qpair failed and we were unable to recover it. 
00:31:12.882 [2024-07-26 14:25:29.463202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.463250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.463501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.463533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.463781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.463830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.464037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.464086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.464327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.464376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.464591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.464622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.464877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.464926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.465143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.465192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.465424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.465463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.465625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.465666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 
00:31:12.882 [2024-07-26 14:25:29.465871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.465920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.466098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.466147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.466382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.466413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.466648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.466693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.466935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.466984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.467223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.467272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.467517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.467548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.467779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.467828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.468035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.468083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.468260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.468291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 
00:31:12.882 [2024-07-26 14:25:29.468532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.468581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.468843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.468891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.469093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.469142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.469385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.469416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.469658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.469712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.469929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.469977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.470228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.470275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.470481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.470519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 Malloc0 00:31:12.882 [2024-07-26 14:25:29.470792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.470842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 [2024-07-26 14:25:29.471095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.471145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 
00:31:12.882 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.882 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:12.882 [2024-07-26 14:25:29.471395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.882 [2024-07-26 14:25:29.471426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.882 qpair failed and we were unable to recover it. 00:31:12.882 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.882 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:12.882 [2024-07-26 14:25:29.471651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.471702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.471937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.471986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.472193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.472242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.472485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.472516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.472764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.472812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.473030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.473079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.473285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.473334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it.
00:31:12.883 [2024-07-26 14:25:29.473565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.473596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.473835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.473884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.474127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.474176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.474339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.883 [2024-07-26 14:25:29.474377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.474414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.474637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.474688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.474904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.474953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.475169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.475218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.475444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.475477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.475681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.475712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.475923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.475972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 
00:31:12.883 [2024-07-26 14:25:29.476181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.476229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.476441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.476472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.476672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.476703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.476961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.477010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.477224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.477273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.477505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.477537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.477737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.477785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.478021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.478070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.478272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.478304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.478507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.478556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 
00:31:12.883 [2024-07-26 14:25:29.478809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.478858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.479070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.479118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.479324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.479355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.479594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.479645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.479883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.479932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.480131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.480180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.883 [2024-07-26 14:25:29.480413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.883 [2024-07-26 14:25:29.480452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.883 qpair failed and we were unable to recover it. 00:31:12.884 [2024-07-26 14:25:29.480687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.480735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 [2024-07-26 14:25:29.480979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.481028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 [2024-07-26 14:25:29.481278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.481326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 
00:31:12.884 [2024-07-26 14:25:29.481566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.481598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 [2024-07-26 14:25:29.481809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.481856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 [2024-07-26 14:25:29.482092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.482140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 [2024-07-26 14:25:29.482345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.482375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.884 [2024-07-26 14:25:29.482609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.482640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:12.884 [2024-07-26 14:25:29.482877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.482926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.884 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:12.884 [2024-07-26 14:25:29.483145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.483195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it. 00:31:12.884 [2024-07-26 14:25:29.483442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.884 [2024-07-26 14:25:29.483474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c78ea0 with addr=10.0.0.2, port=4420 00:31:12.884 qpair failed and we were unable to recover it.
00:31:12.885 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:12.885 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:12.885 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:12.885 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:12.886 [2024-07-26 14:25:29.502655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:12.886 [2024-07-26 14:25:29.505099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.886 [2024-07-26 14:25:29.505270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.886 [2024-07-26 14:25:29.505302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.886 [2024-07-26 14:25:29.505329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.886 [2024-07-26 14:25:29.505356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0
00:31:12.886 [2024-07-26 14:25:29.505411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:12.886 qpair failed and we were unable to recover it.
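The failure mode changes at this point. Up to the "NVMe/TCP Target Listening" notice, the initiator's connect() was refused at the TCP level (errno 111, ECONNREFUSED) because nothing was bound to 10.0.0.2:4420. From here on the TCP handshake succeeds, but the NVMe-oF Fabrics CONNECT for the I/O queue is rejected: the target no longer recognizes controller ID 0x1, and the host reports sct 1, sc 130, which decodes to 0x82, CONNECT Invalid Parameters. This is the disconnect/recovery behavior the target_disconnect test exercises. As a rough illustration only (these probes are not part of the test script, and the second assumes nvme-cli is installed on the host), the two failure modes can be told apart from a shell:

  # TCP-level probe: fails with "Connection refused" while no listener is bound.
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo "TCP accept OK" || echo "TCP refused or timed out"

  # Fabric-level probe: once the listener is up the TCP connect succeeds,
  # but the Fabrics CONNECT command itself can still be rejected by the target.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1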
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:12.886 14:25:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2644355
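For reference, rpc_cmd here is the autotest wrapper that forwards to SPDK's scripts/rpc.py. A sketch of the equivalent standalone invocations for the subsystem setup traced above (assuming the default RPC socket; the wrapper may pass an explicit socket path):

  # Create the subsystem; -a allows any host NQN, -s sets the serial number.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

  # Attach the Malloc0 bdev to the subsystem as a namespace.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # Start the TCP listener that produced the "Target Listening" notice above.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Expose the discovery subsystem on the same address and port.
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420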
00:31:12.887 [... the "Unknown controller ID 0x1" / Fabrics CONNECT failure (sct 1, sc 130) / "CQ transport error -6 (No such device or address) on qpair id 2" sequence shown above repeats continuously for tqpair=0x1c78ea0 throughout this retry loop; only the timestamps differ ...]
00:31:13.150 [2024-07-26 14:25:29.895973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.150 [2024-07-26 14:25:29.896128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.150 [2024-07-26 14:25:29.896157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.150 [2024-07-26 14:25:29.896183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.150 [2024-07-26 14:25:29.896209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.150 [2024-07-26 14:25:29.896260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.150 qpair failed and we were unable to recover it. 00:31:13.150 [2024-07-26 14:25:29.906023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.906165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.906195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.906220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.906247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.906298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:29.916083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.916273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.916303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.916329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.916355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.916404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 
00:31:13.151 [2024-07-26 14:25:29.926073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.926216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.926246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.926272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.926299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.926350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:29.936119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.936264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.936293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.936319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.936342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.936386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:29.946133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.946274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.946305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.946330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.946359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.946412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 
00:31:13.151 [2024-07-26 14:25:29.956156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.956297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.956328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.956354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.956380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.956436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:29.966174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.966308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.966339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.966365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.966400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.966457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:29.976245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.976384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.976414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.976455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.976483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.976533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 
00:31:13.151 [2024-07-26 14:25:29.986248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.986410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.986447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.986473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.986499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.986549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:29.996269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:29.996409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:29.996447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:29.996475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:29.996501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:29.996550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:30.006307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:30.006455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:30.006500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:30.006527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:30.006550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:30.006599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 
00:31:13.151 [2024-07-26 14:25:30.016359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.151 [2024-07-26 14:25:30.016547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.151 [2024-07-26 14:25:30.016578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.151 [2024-07-26 14:25:30.016605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.151 [2024-07-26 14:25:30.016633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.151 [2024-07-26 14:25:30.016682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.151 qpair failed and we were unable to recover it. 00:31:13.151 [2024-07-26 14:25:30.026396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.152 [2024-07-26 14:25:30.026546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.152 [2024-07-26 14:25:30.026578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.152 [2024-07-26 14:25:30.026603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.152 [2024-07-26 14:25:30.026629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.152 [2024-07-26 14:25:30.026679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.152 qpair failed and we were unable to recover it. 00:31:13.411 [2024-07-26 14:25:30.036464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.411 [2024-07-26 14:25:30.036629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.411 [2024-07-26 14:25:30.036663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.411 [2024-07-26 14:25:30.036690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.411 [2024-07-26 14:25:30.036718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.411 [2024-07-26 14:25:30.036773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.411 qpair failed and we were unable to recover it. 
00:31:13.411 [2024-07-26 14:25:30.046500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.411 [2024-07-26 14:25:30.046676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.411 [2024-07-26 14:25:30.046707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.411 [2024-07-26 14:25:30.046733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.411 [2024-07-26 14:25:30.046761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.411 [2024-07-26 14:25:30.046812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.411 qpair failed and we were unable to recover it. 00:31:13.411 [2024-07-26 14:25:30.056500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.411 [2024-07-26 14:25:30.056659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.411 [2024-07-26 14:25:30.056689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.411 [2024-07-26 14:25:30.056730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.411 [2024-07-26 14:25:30.056757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.411 [2024-07-26 14:25:30.056807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.411 qpair failed and we were unable to recover it. 00:31:13.411 [2024-07-26 14:25:30.066501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.411 [2024-07-26 14:25:30.066639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.411 [2024-07-26 14:25:30.066669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.411 [2024-07-26 14:25:30.066695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.066722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.066771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 
00:31:13.412 [2024-07-26 14:25:30.076514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.076653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.076683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.076708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.076734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.076786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.086508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.086652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.086683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.086709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.086736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.086785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.096569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.096723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.096752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.096779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.096806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.096855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 
00:31:13.412 [2024-07-26 14:25:30.106578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.106723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.106753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.106778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.106805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.106854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.116619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.116759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.116789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.116815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.116842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.116891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.126681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.126852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.126882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.126907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.126934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.126985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 
00:31:13.412 [2024-07-26 14:25:30.136694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.136840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.136869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.136894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.136921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.136972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.146710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.146861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.146891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.146929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.146957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.147005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.156704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.156846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.156876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.156902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.156928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.156976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 
00:31:13.412 [2024-07-26 14:25:30.166763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.166936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.166965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.166990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.167017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.167066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.176788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.176931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.176961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.176987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.177014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.412 [2024-07-26 14:25:30.177062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.412 qpair failed and we were unable to recover it. 00:31:13.412 [2024-07-26 14:25:30.186804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.412 [2024-07-26 14:25:30.186979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.412 [2024-07-26 14:25:30.187009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.412 [2024-07-26 14:25:30.187034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.412 [2024-07-26 14:25:30.187061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.187111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 
00:31:13.413 [2024-07-26 14:25:30.196848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.196982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.197012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.197038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.197063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.197111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 00:31:13.413 [2024-07-26 14:25:30.206870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.207014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.207043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.207068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.207094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.207144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 00:31:13.413 [2024-07-26 14:25:30.216948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.217097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.217127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.217153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.217180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.217228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 
00:31:13.413 [2024-07-26 14:25:30.226952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.227096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.227127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.227153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.227179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.227228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 00:31:13.413 [2024-07-26 14:25:30.236968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.237114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.237149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.237177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.237204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.237253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 00:31:13.413 [2024-07-26 14:25:30.247037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.247182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.247212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.247238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.247265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.247314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 
00:31:13.413 [2024-07-26 14:25:30.257020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.257164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.257194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.257220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.257245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.257298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 00:31:13.413 [2024-07-26 14:25:30.267056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.267205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.267236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.267262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.267290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.267340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 00:31:13.413 [2024-07-26 14:25:30.277305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.277466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.277498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.277526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.277554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.277607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 
00:31:13.413 [2024-07-26 14:25:30.287160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.413 [2024-07-26 14:25:30.287304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.413 [2024-07-26 14:25:30.287334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.413 [2024-07-26 14:25:30.287359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.413 [2024-07-26 14:25:30.287384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.413 [2024-07-26 14:25:30.287442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.413 qpair failed and we were unable to recover it. 00:31:13.673 [2024-07-26 14:25:30.297202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.673 [2024-07-26 14:25:30.297353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.673 [2024-07-26 14:25:30.297382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.673 [2024-07-26 14:25:30.297408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.673 [2024-07-26 14:25:30.297444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.673 [2024-07-26 14:25:30.297495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-07-26 14:25:30.307229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.673 [2024-07-26 14:25:30.307364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.673 [2024-07-26 14:25:30.307393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.673 [2024-07-26 14:25:30.307418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.673 [2024-07-26 14:25:30.307453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.673 [2024-07-26 14:25:30.307505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.673 qpair failed and we were unable to recover it. 
00:31:13.673 [2024-07-26 14:25:30.317190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.673 [2024-07-26 14:25:30.317335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.673 [2024-07-26 14:25:30.317365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.673 [2024-07-26 14:25:30.317390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.673 [2024-07-26 14:25:30.317416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.673 [2024-07-26 14:25:30.317505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.673 qpair failed and we were unable to recover it. 00:31:13.673 [2024-07-26 14:25:30.327217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.673 [2024-07-26 14:25:30.327365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.673 [2024-07-26 14:25:30.327400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.327437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.327467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.327517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.337270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.337417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.337455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.337481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.337509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.337558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-07-26 14:25:30.347297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.347457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.347497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.347523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.347550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.347599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.357328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.357486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.357516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.357542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.357568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.357619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.367346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.367518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.367548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.367573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.367599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.367655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-07-26 14:25:30.377438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.377618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.377647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.377672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.377698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.377748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.387389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.387567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.387597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.387621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.387648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.387700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.397465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.397611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.397641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.397667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.397693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.397742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-07-26 14:25:30.407504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.407646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.407675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.407701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.407727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.407778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.417525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.417691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.417726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.417753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.417780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.417829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.427529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.427673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.427702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.427729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.427754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.427803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 
00:31:13.674 [2024-07-26 14:25:30.437591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.437816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.437845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.674 [2024-07-26 14:25:30.437872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.674 [2024-07-26 14:25:30.437899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.674 [2024-07-26 14:25:30.437950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.674 qpair failed and we were unable to recover it. 00:31:13.674 [2024-07-26 14:25:30.447602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.674 [2024-07-26 14:25:30.447756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.674 [2024-07-26 14:25:30.447793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.447820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.447848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.447897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-07-26 14:25:30.457689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.457836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.457875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.457901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.457927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.457985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 
00:31:13.675 [2024-07-26 14:25:30.467661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.467801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.467831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.467856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.467882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.467929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-07-26 14:25:30.477696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.477892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.477921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.477947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.477974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.478023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-07-26 14:25:30.487738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.487944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.487973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.487999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.488026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.488076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 
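The trailing "CQ transport error -6 (No such device or address)" line in each block is a negated POSIX errno rather than an NVMe status; on Linux, 6 is ENXIO, as a short check confirms:

/* The qpair layer reports transport failures as -errno values;
 * -6 is -ENXIO, which strerror() renders exactly as in the log. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("ENXIO = %d -> \"%s\"\n", ENXIO, strerror(ENXIO));
    return 0;
}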
00:31:13.675 [2024-07-26 14:25:30.497763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.497907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.497937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.497962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.497987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.498039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-07-26 14:25:30.507753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.507924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.507960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.507986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.508013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.508062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-07-26 14:25:30.517885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.518029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.518058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.518083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.518109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.518162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 
00:31:13.675 [2024-07-26 14:25:30.527808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.527984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.528014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.528039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.528065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.528114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-07-26 14:25:30.537862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.538008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.538037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.538063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.538089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.538139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.675 [2024-07-26 14:25:30.547890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.548036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.548067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.548092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.548126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.548176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 
00:31:13.675 [2024-07-26 14:25:30.557925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.675 [2024-07-26 14:25:30.558062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.675 [2024-07-26 14:25:30.558091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.675 [2024-07-26 14:25:30.558117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.675 [2024-07-26 14:25:30.558143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.675 [2024-07-26 14:25:30.558193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.675 qpair failed and we were unable to recover it. 00:31:13.935 [2024-07-26 14:25:30.567935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.935 [2024-07-26 14:25:30.568079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.935 [2024-07-26 14:25:30.568109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.935 [2024-07-26 14:25:30.568134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.935 [2024-07-26 14:25:30.568161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.935 [2024-07-26 14:25:30.568213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.935 qpair failed and we were unable to recover it. 00:31:13.935 [2024-07-26 14:25:30.577984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.935 [2024-07-26 14:25:30.578128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.935 [2024-07-26 14:25:30.578158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.935 [2024-07-26 14:25:30.578184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.935 [2024-07-26 14:25:30.578210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.935 [2024-07-26 14:25:30.578261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.935 qpair failed and we were unable to recover it. 
00:31:13.935 [2024-07-26 14:25:30.588032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.935 [2024-07-26 14:25:30.588199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.935 [2024-07-26 14:25:30.588229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.935 [2024-07-26 14:25:30.588254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.935 [2024-07-26 14:25:30.588280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.935 [2024-07-26 14:25:30.588328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.935 qpair failed and we were unable to recover it. 00:31:13.935 [2024-07-26 14:25:30.598005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.935 [2024-07-26 14:25:30.598145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.935 [2024-07-26 14:25:30.598175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.935 [2024-07-26 14:25:30.598201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.935 [2024-07-26 14:25:30.598228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.935 [2024-07-26 14:25:30.598276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.935 qpair failed and we were unable to recover it. 00:31:13.935 [2024-07-26 14:25:30.608052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.935 [2024-07-26 14:25:30.608192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.935 [2024-07-26 14:25:30.608221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.935 [2024-07-26 14:25:30.608246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.935 [2024-07-26 14:25:30.608272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.935 [2024-07-26 14:25:30.608323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.935 qpair failed and we were unable to recover it. 
00:31:13.935 [2024-07-26 14:25:30.618091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.935 [2024-07-26 14:25:30.618263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.618292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.618317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.618343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.618392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.628126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.628299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.628328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.628354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.628381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.628440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.638150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.638288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.638318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.638343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.638377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.638435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 
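The target-side half of each block, "Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair, means the I/O-queue CONNECT named a controller ID the target no longer tracks; the disconnect test tears the controller down between attempts, so cntlid 0x1 is stale. For reference, the CONNECT command's data block carries that ID as a 16-bit field. The sketch below follows the NVMe-oF specification's layout with illustrative names, not SPDK's own definitions:

#include <stdint.h>

/* Abridged Fabrics CONNECT data layout per the NVMe-oF spec; field names
 * are illustrative, not copied from SPDK headers. */
struct nvmf_connect_data_sketch {
    uint8_t  hostid[16];    /* host identifier */
    uint16_t cntlid;        /* requested controller ID; 0xFFFF = dynamic.
                             * The host here asks for 0x1, which the target
                             * no longer has, hence the rejection. */
    uint8_t  reserved0[238];
    char     subnqn[256];   /* e.g. nqn.2016-06.io.spdk:cnode1, as logged */
    char     hostnqn[256];
    uint8_t  reserved1[256];
};

_Static_assert(sizeof(struct nvmf_connect_data_sketch) == 1024,
               "CONNECT data block is 1024 bytes");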
00:31:13.936 [2024-07-26 14:25:30.648176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.648321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.648350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.648375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.648401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.648465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.658218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.658367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.658395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.658421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.658455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.658506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.668224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.668401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.668438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.668466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.668492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.668541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 
00:31:13.936 [2024-07-26 14:25:30.678262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.678473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.678502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.678528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.678554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.678604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.688276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.688422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.688461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.688487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.688514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.688564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.698331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.698485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.698514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.698540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.698567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.698620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 
00:31:13.936 [2024-07-26 14:25:30.708327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.708480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.708510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.708535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.708562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.708612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.718414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.718596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.718626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.718651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.718678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.718731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 00:31:13.936 [2024-07-26 14:25:30.728408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.728562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.936 [2024-07-26 14:25:30.728593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.936 [2024-07-26 14:25:30.728619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.936 [2024-07-26 14:25:30.728653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.936 [2024-07-26 14:25:30.728704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.936 qpair failed and we were unable to recover it. 
00:31:13.936 [2024-07-26 14:25:30.738412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.936 [2024-07-26 14:25:30.738575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.738604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.738629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.738654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.738706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 00:31:13.937 [2024-07-26 14:25:30.748446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.748589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.748619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.748645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.748671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.748720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 00:31:13.937 [2024-07-26 14:25:30.758451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.758591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.758621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.758646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.758671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.758723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 
00:31:13.937 [2024-07-26 14:25:30.768485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.768620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.768650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.768675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.768701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.768753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 00:31:13.937 [2024-07-26 14:25:30.778563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.778729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.778758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.778783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.778809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.778860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 00:31:13.937 [2024-07-26 14:25:30.788540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.788684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.788714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.788739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.788766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.788817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 
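Note the wall-clock stamps: attempts land roughly every 10 ms (30.347, 30.357, 30.367, and so on), i.e. the host re-issues the CONNECT on a fixed polling cadence until it gives up. In shape, that is the bounded retry loop sketched below; try_connect() is a hypothetical stand-in for the qpair connect-and-poll sequence, not an SPDK API:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for one CONNECT attempt; the log shows every
 * attempt being rejected with sct 1 / sc 0x82. */
static bool try_connect(void)
{
    return false;
}

int main(void)
{
    const useconds_t interval_us = 10 * 1000; /* ~10 ms cadence, as logged */
    const int max_attempts = 64;              /* illustrative bound */

    for (int i = 0; i < max_attempts; i++) {
        if (try_connect()) {
            puts("connected");
            return 0;
        }
        usleep(interval_us);
    }
    puts("qpair failed and we were unable to recover it.");
    return 1;
}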
00:31:13.937 [2024-07-26 14:25:30.798594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.798731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.798761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.798786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.798813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.798864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 00:31:13.937 [2024-07-26 14:25:30.808592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.808728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.808757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.808782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.808810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.808860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 00:31:13.937 [2024-07-26 14:25:30.818670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.937 [2024-07-26 14:25:30.818818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.937 [2024-07-26 14:25:30.818847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.937 [2024-07-26 14:25:30.818880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.937 [2024-07-26 14:25:30.818907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:13.937 [2024-07-26 14:25:30.818956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:13.937 qpair failed and we were unable to recover it. 
00:31:14.196 [2024-07-26 14:25:30.828651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.196 [2024-07-26 14:25:30.828792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.196 [2024-07-26 14:25:30.828821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.828848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.828874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.828926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.838702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.838888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.838917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.838944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.838970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.839022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.848749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.848885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.848914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.848940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.848966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.849015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-26 14:25:30.858735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.858878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.858908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.858934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.858960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.859009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.868772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.868912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.868941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.868967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.868993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.869041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.878782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.878920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.878949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.878975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.879001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.879050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-26 14:25:30.888885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.889068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.889098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.889123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.889149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.889196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.898862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.899064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.899094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.899120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.899146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.899195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.908880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.909046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.909075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.909113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.909142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.909191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-26 14:25:30.918944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.919094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.919124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.919150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.919176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.919227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.928937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.929096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.929126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.929153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.929180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.929233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 00:31:14.197 [2024-07-26 14:25:30.939015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.939160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.197 [2024-07-26 14:25:30.939190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.197 [2024-07-26 14:25:30.939216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.197 [2024-07-26 14:25:30.939242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.197 [2024-07-26 14:25:30.939290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.197 qpair failed and we were unable to recover it. 
00:31:14.197 [2024-07-26 14:25:30.949042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.197 [2024-07-26 14:25:30.949204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.198 [2024-07-26 14:25:30.949235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.198 [2024-07-26 14:25:30.949262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.198 [2024-07-26 14:25:30.949289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.198 [2024-07-26 14:25:30.949340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-26 14:25:30.959010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.198 [2024-07-26 14:25:30.959190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.198 [2024-07-26 14:25:30.959220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.198 [2024-07-26 14:25:30.959247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.198 [2024-07-26 14:25:30.959274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.198 [2024-07-26 14:25:30.959324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-26 14:25:30.969044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.198 [2024-07-26 14:25:30.969181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.198 [2024-07-26 14:25:30.969211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.198 [2024-07-26 14:25:30.969237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.198 [2024-07-26 14:25:30.969263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.198 [2024-07-26 14:25:30.969313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.198 qpair failed and we were unable to recover it. 
00:31:14.198 [2024-07-26 14:25:30.979121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.198 [2024-07-26 14:25:30.979263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.198 [2024-07-26 14:25:30.979293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.198 [2024-07-26 14:25:30.979318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.198 [2024-07-26 14:25:30.979345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c78ea0 00:31:14.198 [2024-07-26 14:25:30.979397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:14.198 qpair failed and we were unable to recover it. 00:31:14.198 [2024-07-26 14:25:30.979473] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:31:14.198 A controller has encountered a failure and is being reset. 00:31:14.198 [2024-07-26 14:25:30.979546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c75b00 (9): Bad file descriptor 00:31:14.198 Controller properly reset. 00:31:18.382 Initializing NVMe Controllers 00:31:18.382 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:18.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:18.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:18.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:18.382 Initialization complete. Launching workers. 
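After the keep-alive submission fails, the harness resets the controller and re-runs initialization, re-attaching to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and associating one TCP queue with each of lcores 0-3 before relaunching the workers. Outside this test, an equivalent attach can be issued against a running SPDK application over JSON-RPC; a minimal sketch, where the <spdk> path placeholder and the bdev name Nvme0 are illustrative rather than taken from this run:

    # Hedged sketch: attach the same fabrics controller through SPDK's stock
    # JSON-RPC frontend (scripts/rpc.py in the SPDK tree).
    <spdk>/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1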
00:31:18.382 Starting thread on core 1 00:31:18.382 Starting thread on core 2 00:31:18.382 Starting thread on core 3 00:31:18.382 Starting thread on core 0 00:31:18.382 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:18.382 00:31:18.382 real 0m11.456s 00:31:18.382 user 0m32.300s 00:31:18.382 sys 0m6.912s 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.383 ************************************ 00:31:18.383 END TEST nvmf_target_disconnect_tc2 00:31:18.383 ************************************ 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:18.383 rmmod nvme_tcp 00:31:18.383 rmmod nvme_fabrics 00:31:18.383 rmmod nvme_keyring 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2644765 ']' 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2644765 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2644765 ']' 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2644765 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2644765 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2644765' 00:31:18.383 killing process with pid 2644765 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 2644765 00:31:18.383 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2644765 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.952 14:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.855 14:25:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.855 00:31:20.855 real 0m17.101s 00:31:20.855 user 0m57.588s 00:31:20.855 sys 0m9.825s 00:31:20.855 14:25:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.855 14:25:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:20.855 ************************************ 00:31:20.855 END TEST nvmf_target_disconnect 00:31:20.855 ************************************ 00:31:20.855 14:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:20.855 00:31:20.855 real 5m46.202s 00:31:20.855 user 12m35.720s 00:31:20.855 sys 1m29.498s 00:31:20.855 14:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.855 14:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.855 ************************************ 00:31:20.855 END TEST nvmf_host 00:31:20.855 ************************************ 00:31:21.113 00:31:21.113 real 22m18.239s 00:31:21.113 user 53m11.240s 00:31:21.113 sys 5m46.417s 00:31:21.113 14:25:37 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:21.113 14:25:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.113 ************************************ 00:31:21.113 END TEST nvmf_tcp 00:31:21.113 ************************************ 00:31:21.113 14:25:37 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:31:21.113 14:25:37 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:21.113 14:25:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:21.113 14:25:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:21.113 14:25:37 -- common/autotest_common.sh@10 -- # set +x 00:31:21.113 ************************************ 00:31:21.113 START TEST spdkcli_nvmf_tcp 00:31:21.113 ************************************ 00:31:21.113 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:21.113 * Looking for test storage... 
00:31:21.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:21.113 14:25:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:21.113 14:25:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2645963 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2645963 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2645963 ']' 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:21.114 14:25:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.114 [2024-07-26 14:25:37.990060] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:31:21.114 [2024-07-26 14:25:37.990211] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2645963 ] 00:31:21.371 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.371 [2024-07-26 14:25:38.072844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:21.371 [2024-07-26 14:25:38.195192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.371 [2024-07-26 14:25:38.195199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.629 14:25:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:21.629 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:21.629 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:21.629 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:21.629 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:21.629 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:21.629 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:21.629 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:21.629 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:21.629 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:21.629 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:21.629 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:21.629 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:21.629 ' 00:31:24.157 [2024-07-26 14:25:40.970271] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.528 [2024-07-26 14:25:42.226782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:28.054 [2024-07-26 14:25:44.558057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:29.957 [2024-07-26 14:25:46.528089] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:31.333 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:31.333 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:31.333 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:31.333 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:31.333 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:31.333 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:31.333 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:31.333 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:31.333 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:31.333 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:31.333 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:31.333 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:31.333 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:31.333 14:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:31.912 14:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.170 14:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:32.170 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:32.170 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:32.170 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:32.170 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:32.170 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:32.170 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:32.170 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:32.170 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:32.170 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:32.170 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:32.170 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:32.170 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:32.170 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:32.170 ' 00:31:37.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:37.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:37.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:37.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:37.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:37.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:37.436 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:37.436 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:37.436 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:37.436 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:37.436 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:31:37.436 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:37.436 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:37.436 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2645963 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2645963 ']' 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2645963 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2645963 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2645963' 00:31:37.436 killing process with pid 2645963 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2645963 00:31:37.436 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2645963 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2645963 ']' 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2645963 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2645963 ']' 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2645963 00:31:37.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2645963) - No such process 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2645963 is not found' 00:31:37.696 Process with pid 2645963 is not found 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:37.696 00:31:37.696 real 0m16.686s 00:31:37.696 user 0m35.542s 00:31:37.696 sys 0m0.929s 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:37.696 14:25:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.696 ************************************ 00:31:37.696 END TEST spdkcli_nvmf_tcp 00:31:37.696 ************************************ 00:31:37.696 14:25:54 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:37.696 14:25:54 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:37.696 14:25:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:37.696 14:25:54 -- common/autotest_common.sh@10 -- # set +x 00:31:37.696 ************************************ 00:31:37.696 START TEST nvmf_identify_passthru 00:31:37.696 ************************************ 00:31:37.696 14:25:54 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:37.955 * Looking for test storage... 00:31:37.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:37.955 14:25:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.955 14:25:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.955 14:25:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.955 14:25:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:37.955 14:25:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.955 14:25:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.955 14:25:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.955 14:25:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:37.955 14:25:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.955 14:25:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.955 14:25:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:37.955 14:25:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:37.955 14:25:54 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:37.955 14:25:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:40.491 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:40.491 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:40.491 14:25:57 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:40.491 Found net devices under 0000:84:00.0: cvl_0_0 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.491 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:40.492 Found net devices under 0000:84:00.1: cvl_0_1 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:40.492 14:25:57 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:40.492 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.750 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.750 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.750 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:40.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:31:40.750 00:31:40.750 --- 10.0.0.2 ping statistics --- 00:31:40.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.750 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:31:40.750 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:31:40.750 00:31:40.750 --- 10.0.0.1 ping statistics --- 00:31:40.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.751 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:40.751 14:25:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:40.751 14:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:40.751 14:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:31:40.751 14:25:57 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:31:40.751 14:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:31:40.751 14:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:31:40.751 14:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:31:40.751 14:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:40.751 14:25:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:40.751 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.966 
14:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:31:44.966 14:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:31:44.966 14:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:44.966 14:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:44.966 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.152 14:26:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:49.152 14:26:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:49.152 14:26:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:49.152 14:26:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.152 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.152 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2650599 00:31:49.152 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:49.152 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.152 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2650599 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2650599 ']' 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:49.152 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.411 [2024-07-26 14:26:06.127741] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:31:49.411 [2024-07-26 14:26:06.127900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.411 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.411 [2024-07-26 14:26:06.240532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:49.669 [2024-07-26 14:26:06.365319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.669 [2024-07-26 14:26:06.365384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:49.669 [2024-07-26 14:26:06.365401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.669 [2024-07-26 14:26:06.365414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.669 [2024-07-26 14:26:06.365426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.669 [2024-07-26 14:26:06.365506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.669 [2024-07-26 14:26:06.365575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.669 [2024-07-26 14:26:06.365599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:49.669 [2024-07-26 14:26:06.365603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.669 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:49.669 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:31:49.669 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:49.669 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.669 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.669 INFO: Log level set to 20 00:31:49.669 INFO: Requests: 00:31:49.669 { 00:31:49.669 "jsonrpc": "2.0", 00:31:49.669 "method": "nvmf_set_config", 00:31:49.669 "id": 1, 00:31:49.669 "params": { 00:31:49.669 "admin_cmd_passthru": { 00:31:49.669 "identify_ctrlr": true 00:31:49.669 } 00:31:49.669 } 00:31:49.669 } 00:31:49.669 00:31:49.669 INFO: response: 00:31:49.669 { 00:31:49.669 "jsonrpc": "2.0", 00:31:49.669 "id": 1, 00:31:49.669 "result": true 00:31:49.669 } 00:31:49.669 00:31:49.669 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.669 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:49.669 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.669 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.669 INFO: Setting log level to 20 00:31:49.669 INFO: Setting log level to 20 00:31:49.669 INFO: Log level set to 20 00:31:49.669 INFO: Log level set to 20 00:31:49.669 INFO: Requests: 00:31:49.669 { 00:31:49.669 "jsonrpc": "2.0", 00:31:49.669 "method": "framework_start_init", 00:31:49.669 "id": 1 00:31:49.669 } 00:31:49.669 00:31:49.669 INFO: Requests: 00:31:49.669 { 00:31:49.669 "jsonrpc": "2.0", 00:31:49.669 "method": "framework_start_init", 00:31:49.669 "id": 1 00:31:49.669 } 00:31:49.669 00:31:49.669 [2024-07-26 14:26:06.536925] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:49.669 INFO: response: 00:31:49.669 { 00:31:49.670 "jsonrpc": "2.0", 00:31:49.670 "id": 1, 00:31:49.670 "result": true 00:31:49.670 } 00:31:49.670 00:31:49.670 INFO: response: 00:31:49.670 { 00:31:49.670 "jsonrpc": "2.0", 00:31:49.670 "id": 1, 00:31:49.670 "result": true 00:31:49.670 } 00:31:49.670 00:31:49.670 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.670 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:49.670 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.670 14:26:06 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:49.670 INFO: Setting log level to 40 00:31:49.670 INFO: Setting log level to 40 00:31:49.670 INFO: Setting log level to 40 00:31:49.670 [2024-07-26 14:26:06.547196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.670 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.670 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:49.670 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:49.928 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:49.928 14:26:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:31:49.928 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.928 14:26:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:53.209 Nvme0n1 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:53.209 [2024-07-26 14:26:09.440871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:53.209 [ 00:31:53.209 { 00:31:53.209 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:53.209 "subtype": "Discovery", 00:31:53.209 "listen_addresses": [], 00:31:53.209 "allow_any_host": true, 00:31:53.209 "hosts": [] 00:31:53.209 }, 00:31:53.209 { 00:31:53.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.209 "subtype": "NVMe", 00:31:53.209 "listen_addresses": [ 00:31:53.209 { 00:31:53.209 "trtype": "TCP", 00:31:53.209 "adrfam": "IPv4", 00:31:53.209 "traddr": "10.0.0.2", 00:31:53.209 "trsvcid": "4420" 00:31:53.209 } 00:31:53.209 ], 00:31:53.209 "allow_any_host": true, 00:31:53.209 "hosts": [], 00:31:53.209 "serial_number": 
"SPDK00000000000001", 00:31:53.209 "model_number": "SPDK bdev Controller", 00:31:53.209 "max_namespaces": 1, 00:31:53.209 "min_cntlid": 1, 00:31:53.209 "max_cntlid": 65519, 00:31:53.209 "namespaces": [ 00:31:53.209 { 00:31:53.209 "nsid": 1, 00:31:53.209 "bdev_name": "Nvme0n1", 00:31:53.209 "name": "Nvme0n1", 00:31:53.209 "nguid": "DAA2EF98A98D47D4A559B2271003DA2F", 00:31:53.209 "uuid": "daa2ef98-a98d-47d4-a559-b2271003da2f" 00:31:53.209 } 00:31:53.209 ] 00:31:53.209 } 00:31:53.209 ] 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:53.209 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:53.209 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:53.209 14:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:53.209 rmmod nvme_tcp 00:31:53.209 rmmod nvme_fabrics 00:31:53.209 rmmod nvme_keyring 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:53.209 14:26:09 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2650599 ']' 00:31:53.209 14:26:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2650599 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2650599 ']' 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2650599 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2650599 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2650599' 00:31:53.209 killing process with pid 2650599 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2650599 00:31:53.209 14:26:09 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2650599 00:31:54.578 14:26:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:54.578 14:26:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:54.578 14:26:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:54.578 14:26:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:54.578 14:26:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:54.578 14:26:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.578 14:26:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:54.578 14:26:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.109 14:26:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:57.109 00:31:57.109 real 0m18.936s 00:31:57.109 user 0m26.996s 00:31:57.109 sys 0m3.030s 00:31:57.109 14:26:13 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.109 14:26:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.109 ************************************ 00:31:57.109 END TEST nvmf_identify_passthru 00:31:57.109 ************************************ 00:31:57.109 14:26:13 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:57.109 14:26:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:57.109 14:26:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.109 14:26:13 -- common/autotest_common.sh@10 -- # set +x 00:31:57.109 ************************************ 00:31:57.109 START TEST nvmf_dif 00:31:57.109 ************************************ 00:31:57.109 14:26:13 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:57.109 * Looking for test storage... 
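Before the dif suite begins, a condensed view of what nvmf_identify_passthru just verified: Identify data read straight from the PCIe controller must come back unchanged when read again through the NVMe-oF passthru subsystem. A sketch of the comparison the script performs (full spdk_nvme_identify paths shortened):

    sn_pcie=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
    sn_tcp=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
    [ "$sn_pcie" != "$sn_tcp" ] && exit 1    # the same check is repeated for the model number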
00:31:57.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:57.109 14:26:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.109 14:26:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.109 14:26:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.109 14:26:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.109 14:26:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.109 14:26:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.109 14:26:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.109 14:26:13 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:57.109 14:26:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.109 14:26:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:57.109 14:26:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:57.109 14:26:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:57.109 14:26:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:57.109 14:26:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.109 14:26:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:57.109 14:26:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:57.109 14:26:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:57.109 14:26:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:59.640 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.640 14:26:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:59.641 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
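For context on the discovery loop traced here: once a supported device ID (e810/x722/mlx) matches, the harness resolves the PCI address to its kernel interface through sysfs. Roughly, using the first port found in this run:

    pci=0000:84:00.0                                   # one of the two e810 ports found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs exposes the bound netdev(s)
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the basename, e.g. cvl_0_0
    net_devs+=("${pci_net_devs[@]}")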
00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:59.641 Found net devices under 0000:84:00.0: cvl_0_0 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:59.641 Found net devices under 0000:84:00.1: cvl_0_1 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.641 14:26:16 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:59.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:31:59.641 00:31:59.641 --- 10.0.0.2 ping statistics --- 00:31:59.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.641 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:31:59.641 00:31:59.641 --- 10.0.0.1 ping statistics --- 00:31:59.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.641 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:59.641 14:26:16 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:01.019 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:01.019 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:01.019 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:01.019 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:01.019 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:01.019 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:01.019 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:01.019 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:01.019 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:01.019 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:01.019 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:01.019 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:01.019 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:01.019 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:01.019 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:01.019 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:01.019 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:01.019 14:26:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:01.019 14:26:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:01.019 14:26:17 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:01.019 14:26:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2653893 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:01.019 14:26:17 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2653893 00:32:01.019 14:26:17 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2653893 ']' 00:32:01.019 14:26:17 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.020 14:26:17 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:01.020 14:26:17 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.020 14:26:17 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:01.020 14:26:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.278 [2024-07-26 14:26:17.911699] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:32:01.278 [2024-07-26 14:26:17.911857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.278 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.278 [2024-07-26 14:26:18.002270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.278 [2024-07-26 14:26:18.124855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.278 [2024-07-26 14:26:18.124909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.278 [2024-07-26 14:26:18.124927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.278 [2024-07-26 14:26:18.124940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.278 [2024-07-26 14:26:18.124951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
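The ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt launch above comes from the topology nvmf_tcp_init built a few lines earlier: one e810 port is moved into a network namespace to act as the target, the other stays in the root namespace as the initiator, and port 4420 is opened between them. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT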
00:32:01.278 [2024-07-26 14:26:18.124980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.536 14:26:18 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:01.536 14:26:18 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:32:01.536 14:26:18 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:01.536 14:26:18 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:01.536 14:26:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.536 14:26:18 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.536 14:26:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:01.537 14:26:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:01.537 14:26:18 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.537 14:26:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.537 [2024-07-26 14:26:18.270407] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.537 14:26:18 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.537 14:26:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:01.537 14:26:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:01.537 14:26:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.537 14:26:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:01.537 ************************************ 00:32:01.537 START TEST fio_dif_1_default 00:32:01.537 ************************************ 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.537 bdev_null0 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:01.537 [2024-07-26 14:26:18.326705] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:01.537 { 00:32:01.537 "params": { 00:32:01.537 "name": "Nvme$subsystem", 00:32:01.537 "trtype": "$TEST_TRANSPORT", 00:32:01.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.537 "adrfam": "ipv4", 00:32:01.537 "trsvcid": "$NVMF_PORT", 00:32:01.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.537 "hdgst": ${hdgst:-false}, 00:32:01.537 "ddgst": ${ddgst:-false} 00:32:01.537 }, 00:32:01.537 "method": "bdev_nvme_attach_controller" 00:32:01.537 } 00:32:01.537 EOF 00:32:01.537 )") 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:01.537 "params": { 00:32:01.537 "name": "Nvme0", 00:32:01.537 "trtype": "tcp", 00:32:01.537 "traddr": "10.0.0.2", 00:32:01.537 "adrfam": "ipv4", 00:32:01.537 "trsvcid": "4420", 00:32:01.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:01.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:01.537 "hdgst": false, 00:32:01.537 "ddgst": false 00:32:01.537 }, 00:32:01.537 "method": "bdev_nvme_attach_controller" 00:32:01.537 }' 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:01.537 14:26:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.796 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:01.796 fio-3.35 00:32:01.796 Starting 1 thread 00:32:01.796 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.994 00:32:13.994 filename0: (groupid=0, jobs=1): err= 0: pid=2654216: Fri Jul 26 14:26:29 2024 00:32:13.994 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10036msec) 00:32:13.994 slat (usec): min=6, max=114, avg=10.00, stdev= 4.82 00:32:13.994 clat (usec): min=40884, max=48234, avg=41782.66, stdev=582.30 00:32:13.994 lat (usec): min=40906, max=48278, avg=41792.66, stdev=582.65 00:32:13.994 clat percentiles (usec): 00:32:13.994 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:13.994 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:13.994 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:13.994 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:32:13.994 | 99.99th=[47973] 00:32:13.994 bw ( KiB/s): min= 352, max= 416, per=99.84%, avg=382.40, stdev=12.61, samples=20 00:32:13.994 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:32:13.994 lat 
(msec) : 50=100.00% 00:32:13.994 cpu : usr=89.83%, sys=9.88%, ctx=19, majf=0, minf=226 00:32:13.994 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.994 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.994 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:13.994 00:32:13.994 Run status group 0 (all jobs): 00:32:13.994 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10036-10036msec 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.994 00:32:13.994 real 0m11.378s 00:32:13.994 user 0m10.288s 00:32:13.994 sys 0m1.314s 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 ************************************ 00:32:13.994 END TEST fio_dif_1_default 00:32:13.994 ************************************ 00:32:13.994 14:26:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:13.994 14:26:29 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:13.994 14:26:29 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 ************************************ 00:32:13.994 START TEST fio_dif_1_multi_subsystems 00:32:13.994 ************************************ 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:13.994 14:26:29 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 bdev_null0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 [2024-07-26 14:26:29.764542] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 bdev_null1 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.994 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:13.995 { 00:32:13.995 "params": { 00:32:13.995 "name": "Nvme$subsystem", 00:32:13.995 "trtype": "$TEST_TRANSPORT", 00:32:13.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.995 "adrfam": "ipv4", 00:32:13.995 "trsvcid": "$NVMF_PORT", 00:32:13.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.995 "hdgst": ${hdgst:-false}, 00:32:13.995 "ddgst": ${ddgst:-false} 00:32:13.995 }, 00:32:13.995 "method": "bdev_nvme_attach_controller" 00:32:13.995 } 00:32:13.995 EOF 00:32:13.995 )") 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:32:13.995 14:26:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:13.995 { 00:32:13.995 "params": { 00:32:13.995 "name": "Nvme$subsystem", 00:32:13.995 "trtype": "$TEST_TRANSPORT", 00:32:13.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.995 "adrfam": "ipv4", 00:32:13.995 "trsvcid": "$NVMF_PORT", 00:32:13.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.995 "hdgst": ${hdgst:-false}, 00:32:13.995 "ddgst": ${ddgst:-false} 00:32:13.995 }, 00:32:13.995 "method": "bdev_nvme_attach_controller" 00:32:13.995 } 00:32:13.995 EOF 00:32:13.995 )") 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
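How the generated config reaches fio: fio_bdev preloads the SPDK bdev fio plugin and hands it the JSON printed below (one bdev_nvme_attach_controller stanza per subsystem) on fd 62, with the generated job file on fd 61. In effect:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61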
00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:13.995 "params": { 00:32:13.995 "name": "Nvme0", 00:32:13.995 "trtype": "tcp", 00:32:13.995 "traddr": "10.0.0.2", 00:32:13.995 "adrfam": "ipv4", 00:32:13.995 "trsvcid": "4420", 00:32:13.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:13.995 "hdgst": false, 00:32:13.995 "ddgst": false 00:32:13.995 }, 00:32:13.995 "method": "bdev_nvme_attach_controller" 00:32:13.995 },{ 00:32:13.995 "params": { 00:32:13.995 "name": "Nvme1", 00:32:13.995 "trtype": "tcp", 00:32:13.995 "traddr": "10.0.0.2", 00:32:13.995 "adrfam": "ipv4", 00:32:13.995 "trsvcid": "4420", 00:32:13.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.995 "hdgst": false, 00:32:13.995 "ddgst": false 00:32:13.995 }, 00:32:13.995 "method": "bdev_nvme_attach_controller" 00:32:13.995 }' 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:13.995 14:26:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.995 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:13.995 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:13.995 fio-3.35 00:32:13.995 Starting 2 threads 00:32:13.995 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.965 00:32:23.965 filename0: (groupid=0, jobs=1): err= 0: pid=2655646: Fri Jul 26 14:26:40 2024 00:32:23.965 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10014msec) 00:32:23.965 slat (nsec): min=4816, max=50044, avg=10431.17, stdev=2491.69 00:32:23.965 clat (usec): min=830, max=43845, avg=21228.95, stdev=20189.28 00:32:23.966 lat (usec): min=839, max=43859, avg=21239.38, stdev=20189.01 00:32:23.966 clat percentiles (usec): 00:32:23.966 | 1.00th=[ 848], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 922], 00:32:23.966 | 30.00th=[ 955], 40.00th=[ 1172], 50.00th=[41157], 60.00th=[41157], 00:32:23.966 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:23.966 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:32:23.966 | 99.99th=[43779] 
00:32:23.966 bw ( KiB/s): min= 704, max= 768, per=50.03%, avg=752.00, stdev=28.43, samples=20 00:32:23.966 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:32:23.966 lat (usec) : 1000=34.08% 00:32:23.966 lat (msec) : 2=15.82%, 50=50.11% 00:32:23.966 cpu : usr=94.16%, sys=5.54%, ctx=13, majf=0, minf=29 00:32:23.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.966 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:23.966 filename1: (groupid=0, jobs=1): err= 0: pid=2655647: Fri Jul 26 14:26:40 2024 00:32:23.966 read: IOPS=187, BW=751KiB/s (769kB/s)(7520KiB/10017msec) 00:32:23.966 slat (nsec): min=6056, max=34769, avg=10574.81, stdev=2544.44 00:32:23.966 clat (usec): min=740, max=43812, avg=21279.97, stdev=20259.50 00:32:23.966 lat (usec): min=749, max=43827, avg=21290.55, stdev=20259.39 00:32:23.966 clat percentiles (usec): 00:32:23.966 | 1.00th=[ 799], 5.00th=[ 840], 10.00th=[ 857], 20.00th=[ 898], 00:32:23.966 | 30.00th=[ 955], 40.00th=[ 1012], 50.00th=[41157], 60.00th=[41157], 00:32:23.966 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:32:23.966 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:32:23.966 | 99.99th=[43779] 00:32:23.966 bw ( KiB/s): min= 672, max= 768, per=49.90%, avg=750.40, stdev=31.96, samples=20 00:32:23.966 iops : min= 168, max= 192, avg=187.60, stdev= 7.99, samples=20 00:32:23.966 lat (usec) : 750=0.05%, 1000=38.99% 00:32:23.966 lat (msec) : 2=10.74%, 50=50.21% 00:32:23.966 cpu : usr=94.67%, sys=5.03%, ctx=17, majf=0, minf=171 00:32:23.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.966 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:23.966 00:32:23.966 Run status group 0 (all jobs): 00:32:23.966 READ: bw=1503KiB/s (1539kB/s), 751KiB/s-753KiB/s (769kB/s-771kB/s), io=14.7MiB (15.4MB), run=10014-10017msec 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:24.224 14:26:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.224 00:32:24.224 real 0m11.374s 00:32:24.224 user 0m20.396s 00:32:24.224 sys 0m1.353s 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:24.224 14:26:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:24.224 ************************************ 00:32:24.224 END TEST fio_dif_1_multi_subsystems 00:32:24.224 ************************************ 00:32:24.483 14:26:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:24.483 14:26:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:24.483 14:26:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.483 14:26:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:24.483 ************************************ 00:32:24.483 START TEST fio_dif_rand_params 00:32:24.483 ************************************ 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.483 bdev_null0 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:24.483 [2024-07-26 14:26:41.191419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:24.483 { 00:32:24.483 "params": { 00:32:24.483 "name": "Nvme$subsystem", 00:32:24.483 "trtype": "$TEST_TRANSPORT", 00:32:24.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:24.483 "adrfam": "ipv4", 00:32:24.483 "trsvcid": "$NVMF_PORT", 00:32:24.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:24.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:24.483 "hdgst": ${hdgst:-false}, 00:32:24.483 "ddgst": ${ddgst:-false} 00:32:24.483 }, 00:32:24.483 "method": "bdev_nvme_attach_controller" 00:32:24.483 } 00:32:24.483 EOF 00:32:24.483 )") 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
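The rpc_cmd calls above are the entire single-subsystem target setup for this test: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exported as cnode0 over TCP port 4420. Outside the test harness the same target can be stood up with SPDK's stock RPC client; a sketch, assuming a running nvmf_tgt whose TCP transport was already created earlier in the run:

# Mirror of the rpc_cmd sequence above, via scripts/rpc.py:
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420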
00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:24.483 "params": { 00:32:24.483 "name": "Nvme0", 00:32:24.483 "trtype": "tcp", 00:32:24.483 "traddr": "10.0.0.2", 00:32:24.483 "adrfam": "ipv4", 00:32:24.483 "trsvcid": "4420", 00:32:24.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:24.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:24.483 "hdgst": false, 00:32:24.483 "ddgst": false 00:32:24.483 }, 00:32:24.483 "method": "bdev_nvme_attach_controller" 00:32:24.483 }' 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:24.483 14:26:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:24.741 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:24.741 ... 
00:32:24.741 fio-3.35 00:32:24.741 Starting 3 threads 00:32:24.741 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.293 00:32:31.293 filename0: (groupid=0, jobs=1): err= 0: pid=2656951: Fri Jul 26 14:26:47 2024 00:32:31.293 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(121MiB/5013msec) 00:32:31.293 slat (nsec): min=4863, max=49129, avg=19085.30, stdev=5747.64 00:32:31.293 clat (usec): min=5604, max=92825, avg=15458.62, stdev=12542.48 00:32:31.293 lat (usec): min=5621, max=92845, avg=15477.70, stdev=12542.69 00:32:31.293 clat percentiles (usec): 00:32:31.293 | 1.00th=[ 6194], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[ 9503], 00:32:31.293 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12387], 60.00th=[13042], 00:32:31.294 | 70.00th=[13698], 80.00th=[14615], 90.00th=[17695], 95.00th=[53216], 00:32:31.294 | 99.00th=[57410], 99.50th=[57934], 99.90th=[92799], 99.95th=[92799], 00:32:31.294 | 99.99th=[92799] 00:32:31.294 bw ( KiB/s): min=18688, max=35328, per=33.36%, avg=24785.30, stdev=5366.19, samples=10 00:32:31.294 iops : min= 146, max= 276, avg=193.60, stdev=41.94, samples=10 00:32:31.294 lat (msec) : 10=27.81%, 20=63.54%, 50=1.03%, 100=7.62% 00:32:31.294 cpu : usr=94.41%, sys=5.07%, ctx=13, majf=0, minf=106 00:32:31.294 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.294 issued rwts: total=971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.294 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:31.294 filename0: (groupid=0, jobs=1): err= 0: pid=2656952: Fri Jul 26 14:26:47 2024 00:32:31.294 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(112MiB/5006msec) 00:32:31.294 slat (nsec): min=4949, max=42392, avg=17892.64, stdev=5084.42 00:32:31.294 clat (usec): min=5837, max=62910, avg=16794.21, stdev=11870.48 00:32:31.294 lat (usec): min=5859, max=62926, avg=16812.10, stdev=11870.28 00:32:31.294 clat percentiles (usec): 00:32:31.294 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10421], 00:32:31.294 | 30.00th=[11469], 40.00th=[12649], 50.00th=[13566], 60.00th=[14615], 00:32:31.294 | 70.00th=[15664], 80.00th=[17171], 90.00th=[20317], 95.00th=[52167], 00:32:31.294 | 99.00th=[57934], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:32:31.294 | 99.99th=[62653] 00:32:31.294 bw ( KiB/s): min=16384, max=28160, per=30.66%, avg=22784.00, stdev=3648.44, samples=10 00:32:31.294 iops : min= 128, max= 220, avg=178.00, stdev=28.50, samples=10 00:32:31.294 lat (msec) : 10=15.45%, 20=74.47%, 50=2.69%, 100=7.39% 00:32:31.294 cpu : usr=94.15%, sys=5.39%, ctx=15, majf=0, minf=115 00:32:31.294 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.294 issued rwts: total=893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.294 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:31.294 filename0: (groupid=0, jobs=1): err= 0: pid=2656953: Fri Jul 26 14:26:47 2024 00:32:31.294 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(131MiB/5003msec) 00:32:31.294 slat (nsec): min=5262, max=49314, avg=16418.93, stdev=4388.83 00:32:31.294 clat (usec): min=5252, max=89231, avg=14325.09, stdev=10685.34 00:32:31.294 lat (usec): min=5266, max=89247, avg=14341.51, stdev=10685.57 00:32:31.294 clat percentiles (usec): 
00:32:31.294 | 1.00th=[ 6259], 5.00th=[ 6849], 10.00th=[ 8225], 20.00th=[ 9241], 00:32:31.294 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11994], 60.00th=[12911], 00:32:31.294 | 70.00th=[13698], 80.00th=[14484], 90.00th=[16057], 95.00th=[50070], 00:32:31.294 | 99.00th=[54264], 99.50th=[55837], 99.90th=[57934], 99.95th=[89654], 00:32:31.294 | 99.99th=[89654] 00:32:31.294 bw ( KiB/s): min=17152, max=34304, per=35.97%, avg=26726.40, stdev=6132.37, samples=10 00:32:31.294 iops : min= 134, max= 268, avg=208.80, stdev=47.91, samples=10 00:32:31.294 lat (msec) : 10=31.64%, 20=61.57%, 50=1.63%, 100=5.16% 00:32:31.294 cpu : usr=92.72%, sys=6.74%, ctx=13, majf=0, minf=183 00:32:31.294 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.294 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.294 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:31.294 00:32:31.294 Run status group 0 (all jobs): 00:32:31.294 READ: bw=72.6MiB/s (76.1MB/s), 22.3MiB/s-26.1MiB/s (23.4MB/s-27.4MB/s), io=364MiB (381MB), run=5003-5013msec 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
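The create_subsystems 0 1 2 call that runs next repeats the earlier per-subsystem setup three times, now with DIF type 2 null bdevs. The loop below is an illustrative expansion (the harness drives the same RPCs through rpc_cmd; scripts/rpc.py is the standalone equivalent):

# Expanded form of create_subsystems 0 1 2, as the following log entries show:
for i in 0 1 2; do
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
# Teardown mirrors it, as destroy_subsystems 0 did above:
#   scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
#   scripts/rpc.py bdev_null_delete "bdev_null$i"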
00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 bdev_null0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 [2024-07-26 14:26:47.283388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 bdev_null1 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 bdev_null2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:31.294 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:32:31.295 { 00:32:31.295 "params": { 00:32:31.295 "name": "Nvme$subsystem", 00:32:31.295 "trtype": "$TEST_TRANSPORT", 00:32:31.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.295 "adrfam": "ipv4", 00:32:31.295 "trsvcid": "$NVMF_PORT", 00:32:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.295 "hdgst": ${hdgst:-false}, 00:32:31.295 "ddgst": ${ddgst:-false} 00:32:31.295 }, 00:32:31.295 "method": "bdev_nvme_attach_controller" 00:32:31.295 } 00:32:31.295 EOF 00:32:31.295 )") 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:31.295 { 00:32:31.295 "params": { 00:32:31.295 "name": "Nvme$subsystem", 00:32:31.295 "trtype": "$TEST_TRANSPORT", 00:32:31.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.295 "adrfam": "ipv4", 00:32:31.295 "trsvcid": "$NVMF_PORT", 00:32:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.295 "hdgst": ${hdgst:-false}, 00:32:31.295 "ddgst": ${ddgst:-false} 00:32:31.295 }, 00:32:31.295 "method": "bdev_nvme_attach_controller" 00:32:31.295 } 00:32:31.295 EOF 00:32:31.295 )") 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:31.295 { 00:32:31.295 "params": { 00:32:31.295 "name": "Nvme$subsystem", 00:32:31.295 "trtype": "$TEST_TRANSPORT", 00:32:31.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.295 "adrfam": "ipv4", 00:32:31.295 "trsvcid": "$NVMF_PORT", 00:32:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.295 "hdgst": ${hdgst:-false}, 00:32:31.295 "ddgst": ${ddgst:-false} 00:32:31.295 }, 00:32:31.295 "method": "bdev_nvme_attach_controller" 00:32:31.295 } 00:32:31.295 EOF 00:32:31.295 )") 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:31.295 "params": { 00:32:31.295 "name": "Nvme0", 00:32:31.295 "trtype": "tcp", 00:32:31.295 "traddr": "10.0.0.2", 00:32:31.295 "adrfam": "ipv4", 00:32:31.295 "trsvcid": "4420", 00:32:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.295 "hdgst": false, 00:32:31.295 "ddgst": false 00:32:31.295 }, 00:32:31.295 "method": "bdev_nvme_attach_controller" 00:32:31.295 },{ 00:32:31.295 "params": { 00:32:31.295 "name": "Nvme1", 00:32:31.295 "trtype": "tcp", 00:32:31.295 "traddr": "10.0.0.2", 00:32:31.295 "adrfam": "ipv4", 00:32:31.295 "trsvcid": "4420", 00:32:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:31.295 "hdgst": false, 00:32:31.295 "ddgst": false 00:32:31.295 }, 00:32:31.295 "method": "bdev_nvme_attach_controller" 00:32:31.295 },{ 00:32:31.295 "params": { 00:32:31.295 "name": "Nvme2", 00:32:31.295 "trtype": "tcp", 00:32:31.295 "traddr": "10.0.0.2", 00:32:31.295 "adrfam": "ipv4", 00:32:31.295 "trsvcid": "4420", 00:32:31.295 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:31.295 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:31.295 "hdgst": false, 00:32:31.295 "ddgst": false 00:32:31.295 }, 00:32:31.295 "method": "bdev_nvme_attach_controller" 00:32:31.295 }' 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:31.295 14:26:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.295 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:31.295 ... 00:32:31.295 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:31.295 ... 00:32:31.295 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:31.295 ... 00:32:31.295 fio-3.35 00:32:31.295 Starting 24 threads 00:32:31.295 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.510 00:32:43.510 filename0: (groupid=0, jobs=1): err= 0: pid=2657787: Fri Jul 26 14:26:58 2024 00:32:43.510 read: IOPS=78, BW=316KiB/s (323kB/s)(3192KiB/10114msec) 00:32:43.510 slat (nsec): min=9077, max=82441, avg=14196.85, stdev=11888.91 00:32:43.510 clat (msec): min=144, max=402, avg=202.08, stdev=30.47 00:32:43.510 lat (msec): min=144, max=402, avg=202.09, stdev=30.48 00:32:43.510 clat percentiles (msec): 00:32:43.510 | 1.00th=[ 146], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:32:43.510 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 201], 00:32:43.510 | 70.00th=[ 203], 80.00th=[ 209], 90.00th=[ 234], 95.00th=[ 288], 00:32:43.510 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 405], 99.95th=[ 405], 00:32:43.510 | 99.99th=[ 405] 00:32:43.510 bw ( KiB/s): min= 240, max= 384, per=5.12%, avg=312.80, stdev=45.40, samples=20 00:32:43.510 iops : min= 60, max= 96, avg=78.20, stdev=11.35, samples=20 00:32:43.510 lat (msec) : 250=90.98%, 500=9.02% 00:32:43.510 cpu : usr=98.17%, sys=1.43%, ctx=14, majf=0, minf=10 00:32:43.510 IO depths : 1=0.5%, 2=1.4%, 4=8.6%, 8=77.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:43.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.510 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.510 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.510 filename0: (groupid=0, jobs=1): err= 0: pid=2657788: Fri Jul 26 14:26:58 2024 00:32:43.510 read: IOPS=84, BW=337KiB/s (345kB/s)(3416KiB/10133msec) 00:32:43.510 slat (nsec): min=6771, max=94245, avg=64804.16, stdev=15047.15 00:32:43.510 clat (msec): min=40, max=306, avg=189.13, stdev=31.22 00:32:43.510 lat (msec): min=40, max=306, avg=189.20, stdev=31.23 00:32:43.510 clat percentiles (msec): 00:32:43.510 | 1.00th=[ 41], 5.00th=[ 159], 10.00th=[ 180], 20.00th=[ 184], 00:32:43.510 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 197], 00:32:43.510 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 209], 00:32:43.510 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:32:43.510 | 99.99th=[ 305] 00:32:43.510 bw ( KiB/s): min= 256, max= 432, per=5.50%, avg=335.20, stdev=41.03, samples=20 00:32:43.510 iops : min= 64, max= 108, avg=83.80, stdev=10.26, samples=20 00:32:43.510 lat (msec) : 50=1.64%, 100=2.11%, 250=94.85%, 500=1.41% 00:32:43.510 cpu : usr=98.13%, sys=1.42%, ctx=11, majf=0, minf=9 
00:32:43.510 IO depths : 1=0.2%, 2=0.7%, 4=7.6%, 8=79.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:32:43.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.510 complete : 0=0.0%, 4=89.2%, 8=5.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.510 issued rwts: total=854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.510 filename0: (groupid=0, jobs=1): err= 0: pid=2657789: Fri Jul 26 14:26:58 2024 00:32:43.510 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10110msec) 00:32:43.510 slat (usec): min=8, max=122, avg=60.53, stdev=20.37 00:32:43.510 clat (msec): min=210, max=379, avg=288.35, stdev=17.90 00:32:43.510 lat (msec): min=210, max=379, avg=288.41, stdev=17.91 00:32:43.510 clat percentiles (msec): 00:32:43.510 | 1.00th=[ 218], 5.00th=[ 253], 10.00th=[ 279], 20.00th=[ 288], 00:32:43.510 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.510 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 300], 00:32:43.510 | 99.00th=[ 313], 99.50th=[ 376], 99.90th=[ 380], 99.95th=[ 380], 00:32:43.510 | 99.99th=[ 380] 00:32:43.510 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=217.60, stdev=60.18, samples=20 00:32:43.510 iops : min= 32, max= 64, avg=54.40, stdev=15.05, samples=20 00:32:43.510 lat (msec) : 250=3.57%, 500=96.43% 00:32:43.510 cpu : usr=98.27%, sys=1.28%, ctx=18, majf=0, minf=9 00:32:43.510 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:43.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.510 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.510 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.510 filename0: (groupid=0, jobs=1): err= 0: pid=2657790: Fri Jul 26 14:26:58 2024 00:32:43.510 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10092msec) 00:32:43.510 slat (nsec): min=12456, max=89903, avg=32051.61, stdev=11537.82 00:32:43.510 clat (msec): min=210, max=454, avg=296.52, stdev=28.93 00:32:43.510 lat (msec): min=210, max=454, avg=296.56, stdev=28.93 00:32:43.510 clat percentiles (msec): 00:32:43.510 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.510 | 30.00th=[ 292], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.510 | 70.00th=[ 296], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 313], 00:32:43.510 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:32:43.510 | 99.99th=[ 456] 00:32:43.510 bw ( KiB/s): min= 128, max= 256, per=3.46%, avg=211.20, stdev=62.64, samples=20 00:32:43.510 iops : min= 32, max= 64, avg=52.80, stdev=15.66, samples=20 00:32:43.510 lat (msec) : 250=0.37%, 500=99.63% 00:32:43.510 cpu : usr=97.28%, sys=1.85%, ctx=36, majf=0, minf=9 00:32:43.511 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename0: (groupid=0, jobs=1): err= 0: pid=2657791: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10095msec) 00:32:43.511 slat (usec): min=25, max=100, avg=72.76, stdev=10.52 00:32:43.511 clat (msec): min=187, max=547, avg=296.27, 
stdev=48.25 00:32:43.511 lat (msec): min=187, max=547, avg=296.34, stdev=48.24 00:32:43.511 clat percentiles (msec): 00:32:43.511 | 1.00th=[ 197], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.511 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.511 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:32:43.511 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:32:43.511 | 99.99th=[ 550] 00:32:43.511 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=222.32, stdev=57.91, samples=19 00:32:43.511 iops : min= 32, max= 64, avg=55.58, stdev=14.48, samples=19 00:32:43.511 lat (msec) : 250=4.04%, 500=93.01%, 750=2.94% 00:32:43.511 cpu : usr=98.29%, sys=1.29%, ctx=7, majf=0, minf=10 00:32:43.511 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename0: (groupid=0, jobs=1): err= 0: pid=2657792: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10106msec) 00:32:43.511 slat (nsec): min=17836, max=98056, avg=69399.33, stdev=13194.96 00:32:43.511 clat (msec): min=109, max=461, avg=288.15, stdev=47.46 00:32:43.511 lat (msec): min=109, max=461, avg=288.22, stdev=47.46 00:32:43.511 clat percentiles (msec): 00:32:43.511 | 1.00th=[ 110], 5.00th=[ 178], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.511 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.511 | 70.00th=[ 296], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 363], 00:32:43.511 | 99.00th=[ 414], 99.50th=[ 422], 99.90th=[ 460], 99.95th=[ 460], 00:32:43.511 | 99.99th=[ 460] 00:32:43.511 bw ( KiB/s): min= 128, max= 272, per=3.56%, avg=217.50, stdev=58.76, samples=20 00:32:43.511 iops : min= 32, max= 68, avg=54.35, stdev=14.67, samples=20 00:32:43.511 lat (msec) : 250=7.14%, 500=92.86% 00:32:43.511 cpu : usr=98.05%, sys=1.49%, ctx=15, majf=0, minf=9 00:32:43.511 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename0: (groupid=0, jobs=1): err= 0: pid=2657793: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10095msec) 00:32:43.511 slat (usec): min=27, max=112, avg=71.56, stdev=10.60 00:32:43.511 clat (msec): min=110, max=547, avg=296.33, stdev=57.71 00:32:43.511 lat (msec): min=110, max=547, avg=296.40, stdev=57.70 00:32:43.511 clat percentiles (msec): 00:32:43.511 | 1.00th=[ 188], 5.00th=[ 197], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.511 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.511 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 393], 00:32:43.511 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:32:43.511 | 99.99th=[ 550] 00:32:43.511 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=222.32, stdev=54.36, samples=19 00:32:43.511 iops : min= 32, max= 64, avg=55.58, stdev=13.59, samples=19 00:32:43.511 lat (msec) : 250=8.46%, 500=88.60%, 
750=2.94% 00:32:43.511 cpu : usr=98.04%, sys=1.52%, ctx=20, majf=0, minf=9 00:32:43.511 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename0: (groupid=0, jobs=1): err= 0: pid=2657794: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=83, BW=335KiB/s (343kB/s)(3392KiB/10130msec) 00:32:43.511 slat (nsec): min=5110, max=88563, avg=14064.65, stdev=6704.82 00:32:43.511 clat (msec): min=43, max=251, avg=189.26, stdev=33.19 00:32:43.511 lat (msec): min=43, max=251, avg=189.27, stdev=33.19 00:32:43.511 clat percentiles (msec): 00:32:43.511 | 1.00th=[ 44], 5.00th=[ 128], 10.00th=[ 174], 20.00th=[ 176], 00:32:43.511 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:32:43.511 | 70.00th=[ 203], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 224], 00:32:43.511 | 99.00th=[ 251], 99.50th=[ 251], 99.90th=[ 251], 99.95th=[ 251], 00:32:43.511 | 99.99th=[ 251] 00:32:43.511 bw ( KiB/s): min= 256, max= 512, per=5.45%, avg=332.80, stdev=68.79, samples=20 00:32:43.511 iops : min= 64, max= 128, avg=83.20, stdev=17.20, samples=20 00:32:43.511 lat (msec) : 50=1.89%, 100=1.89%, 250=94.34%, 500=1.89% 00:32:43.511 cpu : usr=97.17%, sys=2.01%, ctx=48, majf=0, minf=9 00:32:43.511 IO depths : 1=0.9%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename1: (groupid=0, jobs=1): err= 0: pid=2657795: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=55, BW=221KiB/s (227kB/s)(2240KiB/10115msec) 00:32:43.511 slat (nsec): min=10408, max=99367, avg=70385.71, stdev=13126.91 00:32:43.511 clat (msec): min=214, max=309, avg=288.35, stdev=15.10 00:32:43.511 lat (msec): min=214, max=309, avg=288.42, stdev=15.10 00:32:43.511 clat percentiles (msec): 00:32:43.511 | 1.00th=[ 215], 5.00th=[ 257], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.511 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.511 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:32:43.511 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:32:43.511 | 99.99th=[ 309] 00:32:43.511 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=217.60, stdev=60.18, samples=20 00:32:43.511 iops : min= 32, max= 64, avg=54.40, stdev=15.05, samples=20 00:32:43.511 lat (msec) : 250=2.86%, 500=97.14% 00:32:43.511 cpu : usr=98.24%, sys=1.32%, ctx=13, majf=0, minf=9 00:32:43.511 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename1: (groupid=0, jobs=1): err= 0: pid=2657796: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=54, BW=217KiB/s (223kB/s)(2176KiB/10005msec) 00:32:43.511 slat (usec): 
min=14, max=101, avg=59.61, stdev=24.79 00:32:43.511 clat (msec): min=174, max=418, avg=293.74, stdev=31.04 00:32:43.511 lat (msec): min=174, max=418, avg=293.80, stdev=31.04 00:32:43.511 clat percentiles (msec): 00:32:43.511 | 1.00th=[ 178], 5.00th=[ 275], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.511 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.511 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 372], 00:32:43.511 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 418], 99.95th=[ 418], 00:32:43.511 | 99.99th=[ 418] 00:32:43.511 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=215.58, stdev=57.78, samples=19 00:32:43.511 iops : min= 32, max= 64, avg=53.89, stdev=14.44, samples=19 00:32:43.511 lat (msec) : 250=2.94%, 500=97.06% 00:32:43.511 cpu : usr=98.08%, sys=1.47%, ctx=36, majf=0, minf=9 00:32:43.511 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename1: (groupid=0, jobs=1): err= 0: pid=2657797: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=59, BW=237KiB/s (242kB/s)(2368KiB/10004msec) 00:32:43.511 slat (nsec): min=15516, max=99265, avg=60523.49, stdev=22213.81 00:32:43.511 clat (msec): min=168, max=413, avg=269.88, stdev=44.81 00:32:43.511 lat (msec): min=168, max=413, avg=269.94, stdev=44.83 00:32:43.511 clat percentiles (msec): 00:32:43.511 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 209], 00:32:43.511 | 30.00th=[ 284], 40.00th=[ 288], 50.00th=[ 288], 60.00th=[ 292], 00:32:43.511 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:32:43.511 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:32:43.511 | 99.99th=[ 414] 00:32:43.511 bw ( KiB/s): min= 128, max= 384, per=3.84%, avg=234.95, stdev=74.29, samples=19 00:32:43.511 iops : min= 32, max= 96, avg=58.74, stdev=18.57, samples=19 00:32:43.511 lat (msec) : 250=23.65%, 500=76.35% 00:32:43.511 cpu : usr=98.21%, sys=1.33%, ctx=61, majf=0, minf=9 00:32:43.511 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:32:43.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.511 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.511 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.511 filename1: (groupid=0, jobs=1): err= 0: pid=2657798: Fri Jul 26 14:26:58 2024 00:32:43.511 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10095msec) 00:32:43.511 slat (nsec): min=14803, max=50581, avg=28415.48, stdev=5530.95 00:32:43.511 clat (msec): min=185, max=546, avg=296.65, stdev=48.06 00:32:43.512 lat (msec): min=185, max=546, avg=296.68, stdev=48.06 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 194], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.512 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.512 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 309], 00:32:43.512 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:32:43.512 | 99.99th=[ 550] 00:32:43.512 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=222.32, stdev=56.16, samples=19 00:32:43.512 iops : min= 32, 
max= 64, avg=55.58, stdev=14.04, samples=19 00:32:43.512 lat (msec) : 250=4.04%, 500=93.01%, 750=2.94% 00:32:43.512 cpu : usr=95.84%, sys=2.61%, ctx=201, majf=0, minf=9 00:32:43.512 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:32:43.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.512 filename1: (groupid=0, jobs=1): err= 0: pid=2657799: Fri Jul 26 14:26:58 2024 00:32:43.512 read: IOPS=84, BW=337KiB/s (345kB/s)(3416KiB/10131msec) 00:32:43.512 slat (nsec): min=8146, max=93236, avg=17685.43, stdev=16005.27 00:32:43.512 clat (msec): min=13, max=236, avg=189.01, stdev=28.59 00:32:43.512 lat (msec): min=13, max=236, avg=189.03, stdev=28.59 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 41], 5.00th=[ 159], 10.00th=[ 182], 20.00th=[ 184], 00:32:43.512 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 197], 00:32:43.512 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 209], 00:32:43.512 | 99.00th=[ 215], 99.50th=[ 236], 99.90th=[ 236], 99.95th=[ 236], 00:32:43.512 | 99.99th=[ 236] 00:32:43.512 bw ( KiB/s): min= 272, max= 496, per=5.50%, avg=335.20, stdev=48.55, samples=20 00:32:43.512 iops : min= 68, max= 124, avg=83.80, stdev=12.14, samples=20 00:32:43.512 lat (msec) : 20=0.82%, 50=1.05%, 100=1.64%, 250=96.49% 00:32:43.512 cpu : usr=97.93%, sys=1.56%, ctx=56, majf=0, minf=9 00:32:43.512 IO depths : 1=0.1%, 2=0.6%, 4=7.5%, 8=79.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:43.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 complete : 0=0.0%, 4=89.2%, 8=5.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 issued rwts: total=854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.512 filename1: (groupid=0, jobs=1): err= 0: pid=2657800: Fri Jul 26 14:26:58 2024 00:32:43.512 read: IOPS=53, BW=215KiB/s (221kB/s)(2176KiB/10102msec) 00:32:43.512 slat (usec): min=19, max=106, avg=72.75, stdev=13.61 00:32:43.512 clat (msec): min=215, max=459, avg=296.37, stdev=32.97 00:32:43.512 lat (msec): min=215, max=459, avg=296.44, stdev=32.96 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 222], 5.00th=[ 279], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.512 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.512 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 359], 00:32:43.512 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:32:43.512 | 99.99th=[ 460] 00:32:43.512 bw ( KiB/s): min= 128, max= 256, per=3.46%, avg=211.20, stdev=61.11, samples=20 00:32:43.512 iops : min= 32, max= 64, avg=52.80, stdev=15.28, samples=20 00:32:43.512 lat (msec) : 250=2.21%, 500=97.79% 00:32:43.512 cpu : usr=98.05%, sys=1.50%, ctx=9, majf=0, minf=9 00:32:43.512 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:32:43.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.512 filename1: (groupid=0, jobs=1): err= 0: pid=2657801: Fri Jul 26 14:26:58 2024 00:32:43.512 
read: IOPS=53, BW=215KiB/s (221kB/s)(2176KiB/10098msec) 00:32:43.512 slat (nsec): min=11083, max=77981, avg=33440.39, stdev=8212.64 00:32:43.512 clat (msec): min=184, max=548, avg=296.69, stdev=46.97 00:32:43.512 lat (msec): min=185, max=548, avg=296.73, stdev=46.97 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 215], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.512 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.512 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:32:43.512 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:32:43.512 | 99.99th=[ 550] 00:32:43.512 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=222.32, stdev=56.16, samples=19 00:32:43.512 iops : min= 32, max= 64, avg=55.58, stdev=14.04, samples=19 00:32:43.512 lat (msec) : 250=3.31%, 500=93.75%, 750=2.94% 00:32:43.512 cpu : usr=97.98%, sys=1.57%, ctx=8, majf=0, minf=9 00:32:43.512 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:43.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.512 filename1: (groupid=0, jobs=1): err= 0: pid=2657802: Fri Jul 26 14:26:58 2024 00:32:43.512 read: IOPS=75, BW=304KiB/s (311kB/s)(3080KiB/10134msec) 00:32:43.512 slat (usec): min=5, max=116, avg=61.43, stdev=16.38 00:32:43.512 clat (msec): min=42, max=373, avg=209.60, stdev=56.27 00:32:43.512 lat (msec): min=42, max=373, avg=209.66, stdev=56.27 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 43], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 182], 00:32:43.512 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 201], 00:32:43.512 | 70.00th=[ 205], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 300], 00:32:43.512 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 372], 99.95th=[ 372], 00:32:43.512 | 99.99th=[ 372] 00:32:43.512 bw ( KiB/s): min= 128, max= 495, per=4.94%, avg=301.55, stdev=73.32, samples=20 00:32:43.512 iops : min= 32, max= 123, avg=75.35, stdev=18.23, samples=20 00:32:43.512 lat (msec) : 50=2.08%, 100=2.08%, 250=74.55%, 500=21.30% 00:32:43.512 cpu : usr=98.23%, sys=1.32%, ctx=26, majf=0, minf=9 00:32:43.512 IO depths : 1=1.3%, 2=4.5%, 4=15.8%, 8=67.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:32:43.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 issued rwts: total=770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.512 filename2: (groupid=0, jobs=1): err= 0: pid=2657803: Fri Jul 26 14:26:58 2024 00:32:43.512 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10106msec) 00:32:43.512 slat (usec): min=6, max=108, avg=72.77, stdev=14.24 00:32:43.512 clat (msec): min=222, max=467, avg=296.58, stdev=31.87 00:32:43.512 lat (msec): min=222, max=467, avg=296.65, stdev=31.86 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 279], 5.00th=[ 279], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.512 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.512 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 313], 00:32:43.512 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:32:43.512 | 99.99th=[ 468] 00:32:43.512 bw ( KiB/s): 
min= 128, max= 256, per=3.46%, avg=211.20, stdev=62.64, samples=20 00:32:43.512 iops : min= 32, max= 64, avg=52.80, stdev=15.66, samples=20 00:32:43.512 lat (msec) : 250=0.74%, 500=99.26% 00:32:43.512 cpu : usr=98.16%, sys=1.41%, ctx=10, majf=0, minf=9 00:32:43.512 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:43.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.512 filename2: (groupid=0, jobs=1): err= 0: pid=2657804: Fri Jul 26 14:26:58 2024 00:32:43.512 read: IOPS=62, BW=249KiB/s (255kB/s)(2520KiB/10109msec) 00:32:43.512 slat (usec): min=15, max=113, avg=63.23, stdev=21.96 00:32:43.512 clat (msec): min=174, max=364, avg=256.24, stdev=44.80 00:32:43.512 lat (msec): min=174, max=364, avg=256.30, stdev=44.81 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 197], 00:32:43.512 | 30.00th=[ 211], 40.00th=[ 257], 50.00th=[ 284], 60.00th=[ 288], 00:32:43.512 | 70.00th=[ 292], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 300], 00:32:43.512 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 363], 00:32:43.512 | 99.99th=[ 363] 00:32:43.512 bw ( KiB/s): min= 128, max= 368, per=4.02%, avg=245.60, stdev=73.46, samples=20 00:32:43.512 iops : min= 32, max= 92, avg=61.40, stdev=18.37, samples=20 00:32:43.512 lat (msec) : 250=36.51%, 500=63.49% 00:32:43.512 cpu : usr=98.23%, sys=1.33%, ctx=14, majf=0, minf=9 00:32:43.512 IO depths : 1=3.8%, 2=8.9%, 4=21.4%, 8=57.1%, 16=8.7%, 32=0.0%, >=64=0.0% 00:32:43.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.512 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.512 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.512 filename2: (groupid=0, jobs=1): err= 0: pid=2657805: Fri Jul 26 14:26:58 2024 00:32:43.512 read: IOPS=55, BW=221KiB/s (226kB/s)(2232KiB/10115msec) 00:32:43.512 slat (usec): min=14, max=114, avg=58.66, stdev=25.66 00:32:43.512 clat (msec): min=178, max=417, avg=289.13, stdev=32.16 00:32:43.512 lat (msec): min=178, max=417, avg=289.19, stdev=32.16 00:32:43.512 clat percentiles (msec): 00:32:43.512 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.512 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.512 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 372], 00:32:43.512 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 418], 00:32:43.512 | 99.99th=[ 418] 00:32:43.512 bw ( KiB/s): min= 128, max= 256, per=3.56%, avg=217.60, stdev=51.49, samples=20 00:32:43.512 iops : min= 32, max= 64, avg=54.40, stdev=12.87, samples=20 00:32:43.512 lat (msec) : 250=7.17%, 500=92.83% 00:32:43.513 cpu : usr=98.11%, sys=1.44%, ctx=14, majf=0, minf=9 00:32:43.513 IO depths : 1=3.2%, 2=9.3%, 4=24.6%, 8=53.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:43.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.513 filename2: (groupid=0, jobs=1): err= 
0: pid=2657806: Fri Jul 26 14:26:58 2024 00:32:43.513 read: IOPS=76, BW=304KiB/s (311kB/s)(3080KiB/10131msec) 00:32:43.513 slat (nsec): min=16243, max=96890, avg=64331.78, stdev=14341.28 00:32:43.513 clat (msec): min=39, max=393, avg=209.52, stdev=56.62 00:32:43.513 lat (msec): min=39, max=393, avg=209.59, stdev=56.63 00:32:43.513 clat percentiles (msec): 00:32:43.513 | 1.00th=[ 40], 5.00th=[ 140], 10.00th=[ 180], 20.00th=[ 184], 00:32:43.513 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 201], 00:32:43.513 | 70.00th=[ 203], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 300], 00:32:43.513 | 99.00th=[ 309], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:32:43.513 | 99.99th=[ 393] 00:32:43.513 bw ( KiB/s): min= 128, max= 512, per=4.94%, avg=301.60, stdev=84.39, samples=20 00:32:43.513 iops : min= 32, max= 128, avg=75.40, stdev=21.10, samples=20 00:32:43.513 lat (msec) : 50=2.08%, 100=2.08%, 250=73.51%, 500=22.34% 00:32:43.513 cpu : usr=98.08%, sys=1.47%, ctx=11, majf=0, minf=9 00:32:43.513 IO depths : 1=1.6%, 2=3.8%, 4=12.7%, 8=70.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:32:43.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 complete : 0=0.0%, 4=90.5%, 8=4.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 issued rwts: total=770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.513 filename2: (groupid=0, jobs=1): err= 0: pid=2657807: Fri Jul 26 14:26:58 2024 00:32:43.513 read: IOPS=82, BW=329KiB/s (337kB/s)(3328KiB/10127msec) 00:32:43.513 slat (nsec): min=5375, max=82400, avg=15411.60, stdev=15760.22 00:32:43.513 clat (msec): min=42, max=309, avg=194.60, stdev=39.35 00:32:43.513 lat (msec): min=42, max=309, avg=194.61, stdev=39.35 00:32:43.513 clat percentiles (msec): 00:32:43.513 | 1.00th=[ 43], 5.00th=[ 159], 10.00th=[ 174], 20.00th=[ 178], 00:32:43.513 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 197], 00:32:43.513 | 70.00th=[ 213], 80.00th=[ 213], 90.00th=[ 222], 95.00th=[ 284], 00:32:43.513 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:32:43.513 | 99.99th=[ 309] 00:32:43.513 bw ( KiB/s): min= 256, max= 384, per=5.35%, avg=326.35, stdev=65.29, samples=20 00:32:43.513 iops : min= 64, max= 96, avg=81.55, stdev=16.29, samples=20 00:32:43.513 lat (msec) : 50=1.92%, 100=1.92%, 250=90.38%, 500=5.77% 00:32:43.513 cpu : usr=97.82%, sys=1.75%, ctx=21, majf=0, minf=9 00:32:43.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:43.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.513 filename2: (groupid=0, jobs=1): err= 0: pid=2657808: Fri Jul 26 14:26:58 2024 00:32:43.513 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10093msec) 00:32:43.513 slat (nsec): min=14477, max=61604, avg=32301.65, stdev=6330.61 00:32:43.513 clat (msec): min=210, max=454, avg=296.57, stdev=38.39 00:32:43.513 lat (msec): min=210, max=454, avg=296.60, stdev=38.39 00:32:43.513 clat percentiles (msec): 00:32:43.513 | 1.00th=[ 213], 5.00th=[ 222], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.513 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.513 | 70.00th=[ 296], 80.00th=[ 296], 90.00th=[ 313], 95.00th=[ 372], 00:32:43.513 | 99.00th=[ 456], 99.50th=[ 456], 
99.90th=[ 456], 99.95th=[ 456], 00:32:43.513 | 99.99th=[ 456] 00:32:43.513 bw ( KiB/s): min= 128, max= 256, per=3.46%, avg=211.20, stdev=61.11, samples=20 00:32:43.513 iops : min= 32, max= 64, avg=52.80, stdev=15.28, samples=20 00:32:43.513 lat (msec) : 250=5.88%, 500=94.12% 00:32:43.513 cpu : usr=97.32%, sys=2.07%, ctx=14, majf=0, minf=9 00:32:43.513 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:32:43.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.513 filename2: (groupid=0, jobs=1): err= 0: pid=2657809: Fri Jul 26 14:26:58 2024 00:32:43.513 read: IOPS=79, BW=319KiB/s (326kB/s)(3224KiB/10114msec) 00:32:43.513 slat (nsec): min=8889, max=33301, avg=12033.15, stdev=4490.61 00:32:43.513 clat (msec): min=121, max=294, avg=200.28, stdev=29.63 00:32:43.513 lat (msec): min=121, max=294, avg=200.29, stdev=29.63 00:32:43.513 clat percentiles (msec): 00:32:43.513 | 1.00th=[ 130], 5.00th=[ 157], 10.00th=[ 174], 20.00th=[ 182], 00:32:43.513 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 194], 60.00th=[ 201], 00:32:43.513 | 70.00th=[ 203], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 255], 00:32:43.513 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 296], 00:32:43.513 | 99.99th=[ 296] 00:32:43.513 bw ( KiB/s): min= 240, max= 384, per=5.19%, avg=316.00, stdev=47.82, samples=20 00:32:43.513 iops : min= 60, max= 96, avg=79.00, stdev=11.96, samples=20 00:32:43.513 lat (msec) : 250=92.56%, 500=7.44% 00:32:43.513 cpu : usr=98.06%, sys=1.50%, ctx=10, majf=0, minf=9 00:32:43.513 IO depths : 1=0.2%, 2=2.9%, 4=13.6%, 8=70.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:32:43.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 issued rwts: total=806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.513 filename2: (groupid=0, jobs=1): err= 0: pid=2657810: Fri Jul 26 14:26:58 2024 00:32:43.513 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10095msec) 00:32:43.513 slat (nsec): min=19327, max=97583, avg=35669.70, stdev=12161.43 00:32:43.513 clat (msec): min=110, max=631, avg=296.64, stdev=58.33 00:32:43.513 lat (msec): min=110, max=631, avg=296.68, stdev=58.33 00:32:43.513 clat percentiles (msec): 00:32:43.513 | 1.00th=[ 186], 5.00th=[ 201], 10.00th=[ 284], 20.00th=[ 288], 00:32:43.513 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 292], 00:32:43.513 | 70.00th=[ 296], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 393], 00:32:43.513 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 634], 99.95th=[ 634], 00:32:43.513 | 99.99th=[ 634] 00:32:43.513 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=222.32, stdev=56.16, samples=19 00:32:43.513 iops : min= 32, max= 64, avg=55.58, stdev=14.04, samples=19 00:32:43.513 lat (msec) : 250=8.09%, 500=88.97%, 750=2.94% 00:32:43.513 cpu : usr=98.15%, sys=1.42%, ctx=19, majf=0, minf=9 00:32:43.513 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:32:43.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.513 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:32:43.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:43.513 00:32:43.513 Run status group 0 (all jobs): 00:32:43.513 READ: bw=6091KiB/s (6237kB/s), 215KiB/s-337KiB/s (220kB/s-345kB/s), io=60.3MiB (63.2MB), run=10004-10134msec 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:43.513 14:26:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 bdev_null0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.513 [2024-07-26 14:26:59.441011] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:43.513 
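The create_subsystem helper traced above reduces to four JSON-RPC calls per subsystem. A minimal standalone sketch, assuming rpc_cmd is a thin wrapper around scripts/rpc.py against the running target (all RPC names and arguments copied from the trace):

#!/usr/bin/env bash
# Recreate one null-bdev-backed NVMe/TCP subsystem, as create_subsystem 0 does.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sub=0
# 64 MB null bdev: 512-byte blocks plus 16 bytes of metadata, DIF type 1
"$rpc" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
"$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
"$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
"$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420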
14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:43.513 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.514 bdev_null1 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:43.514 { 00:32:43.514 "params": { 00:32:43.514 "name": "Nvme$subsystem", 00:32:43.514 "trtype": "$TEST_TRANSPORT", 00:32:43.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.514 "adrfam": "ipv4", 00:32:43.514 "trsvcid": "$NVMF_PORT", 00:32:43.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.514 "hdgst": ${hdgst:-false}, 00:32:43.514 "ddgst": ${ddgst:-false} 00:32:43.514 }, 00:32:43.514 "method": "bdev_nvme_attach_controller" 00:32:43.514 } 00:32:43.514 EOF 00:32:43.514 )") 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:43.514 { 00:32:43.514 "params": { 00:32:43.514 "name": "Nvme$subsystem", 00:32:43.514 "trtype": "$TEST_TRANSPORT", 00:32:43.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.514 "adrfam": "ipv4", 00:32:43.514 "trsvcid": "$NVMF_PORT", 00:32:43.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.514 "hdgst": ${hdgst:-false}, 00:32:43.514 "ddgst": ${ddgst:-false} 00:32:43.514 }, 00:32:43.514 "method": "bdev_nvme_attach_controller" 00:32:43.514 } 00:32:43.514 EOF 00:32:43.514 )") 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
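The gen_nvmf_target_json trace above captures one bdev_nvme_attach_controller object per subsystem into a bash array through a command-substituted heredoc; the fragments are then comma-joined via IFS and validated with jq. A standalone sketch of the same pattern (the outer wrapper that embeds the join into a full --spdk_json_conf document is not visible in this trace and is elided here):

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" joins the elements with the first character of IFS, producing
# the single '{...},{...}' argument handed to printf in the trace below.
joined=$(IFS=,; printf '%s' "${config[*]}")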
00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:43.514 "params": { 00:32:43.514 "name": "Nvme0", 00:32:43.514 "trtype": "tcp", 00:32:43.514 "traddr": "10.0.0.2", 00:32:43.514 "adrfam": "ipv4", 00:32:43.514 "trsvcid": "4420", 00:32:43.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:43.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:43.514 "hdgst": false, 00:32:43.514 "ddgst": false 00:32:43.514 }, 00:32:43.514 "method": "bdev_nvme_attach_controller" 00:32:43.514 },{ 00:32:43.514 "params": { 00:32:43.514 "name": "Nvme1", 00:32:43.514 "trtype": "tcp", 00:32:43.514 "traddr": "10.0.0.2", 00:32:43.514 "adrfam": "ipv4", 00:32:43.514 "trsvcid": "4420", 00:32:43.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.514 "hdgst": false, 00:32:43.514 "ddgst": false 00:32:43.514 }, 00:32:43.514 "method": "bdev_nvme_attach_controller" 00:32:43.514 }' 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:43.514 14:26:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.514 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:43.514 ... 00:32:43.514 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:43.514 ... 
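The fio job file itself is fed through /dev/fd/61 and never echoed, but the banner above pins it down: randread, per-direction block sizes of 8k/16k/128k, the spdk_bdev engine, iodepth 8, and two job sections run with numjobs=2 for 5 seconds. An illustrative reconstruction follows, not the literal generated file; the section names match the banner, while the Nvme0n1/Nvme1n1 bdev names are an assumption based on SPDK's usual controller-plus-namespace naming:

[global]
thread=1            ; the SPDK fio plugin requires threaded jobs
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k      ; read/write/trim sizes, matching (R)/(W)/(T) in the banner
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1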
00:32:43.514 fio-3.35 00:32:43.514 Starting 4 threads 00:32:43.514 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.824 00:32:48.824 filename0: (groupid=0, jobs=1): err= 0: pid=2659333: Fri Jul 26 14:27:05 2024 00:32:48.824 read: IOPS=1699, BW=13.3MiB/s (13.9MB/s)(66.5MiB/5004msec) 00:32:48.824 slat (nsec): min=4850, max=94011, avg=22382.98, stdev=11669.88 00:32:48.824 clat (usec): min=838, max=8364, avg=4630.99, stdev=566.68 00:32:48.824 lat (usec): min=858, max=8372, avg=4653.37, stdev=566.65 00:32:48.824 clat percentiles (usec): 00:32:48.824 | 1.00th=[ 3097], 5.00th=[ 3851], 10.00th=[ 4146], 20.00th=[ 4424], 00:32:48.824 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:32:48.824 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5407], 00:32:48.824 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8094], 99.95th=[ 8291], 00:32:48.824 | 99.99th=[ 8356] 00:32:48.824 bw ( KiB/s): min=12960, max=14464, per=25.27%, avg=13599.50, stdev=378.79, samples=10 00:32:48.824 iops : min= 1620, max= 1808, avg=1699.90, stdev=47.34, samples=10 00:32:48.824 lat (usec) : 1000=0.02% 00:32:48.824 lat (msec) : 2=0.18%, 4=7.14%, 10=92.66% 00:32:48.824 cpu : usr=91.45%, sys=6.48%, ctx=124, majf=0, minf=33 00:32:48.824 IO depths : 1=0.1%, 2=9.2%, 4=64.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 issued rwts: total=8506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.824 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.824 filename0: (groupid=0, jobs=1): err= 0: pid=2659334: Fri Jul 26 14:27:05 2024 00:32:48.824 read: IOPS=1683, BW=13.1MiB/s (13.8MB/s)(65.8MiB/5001msec) 00:32:48.824 slat (nsec): min=5125, max=79703, avg=21754.45, stdev=11449.44 00:32:48.824 clat (usec): min=1025, max=8177, avg=4680.51, stdev=608.62 00:32:48.824 lat (usec): min=1049, max=8192, avg=4702.26, stdev=608.49 00:32:48.824 clat percentiles (usec): 00:32:48.824 | 1.00th=[ 3064], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 4490], 00:32:48.824 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:32:48.824 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5735], 00:32:48.824 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 7767], 99.95th=[ 8029], 00:32:48.824 | 99.99th=[ 8160] 00:32:48.824 bw ( KiB/s): min=13104, max=14000, per=25.02%, avg=13469.67, stdev=297.35, samples=9 00:32:48.824 iops : min= 1638, max= 1750, avg=1683.67, stdev=37.12, samples=9 00:32:48.824 lat (msec) : 2=0.15%, 4=5.99%, 10=93.86% 00:32:48.824 cpu : usr=95.18%, sys=4.30%, ctx=15, majf=0, minf=68 00:32:48.824 IO depths : 1=0.1%, 2=10.3%, 4=62.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 issued rwts: total=8417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.824 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.824 filename1: (groupid=0, jobs=1): err= 0: pid=2659335: Fri Jul 26 14:27:05 2024 00:32:48.824 read: IOPS=1688, BW=13.2MiB/s (13.8MB/s)(66.5MiB/5042msec) 00:32:48.824 slat (nsec): min=5066, max=70100, avg=20832.19, stdev=11807.97 00:32:48.824 clat (usec): min=1065, max=46021, avg=4640.55, stdev=840.98 00:32:48.824 lat (usec): min=1090, max=46057, avg=4661.38, stdev=841.46 00:32:48.824 clat percentiles (usec): 
00:32:48.824 | 1.00th=[ 2966], 5.00th=[ 3752], 10.00th=[ 4146], 20.00th=[ 4424], 00:32:48.824 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:32:48.824 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5538], 00:32:48.824 | 99.00th=[ 7111], 99.50th=[ 7439], 99.90th=[ 8160], 99.95th=[ 8225], 00:32:48.824 | 99.99th=[45876] 00:32:48.824 bw ( KiB/s): min=13312, max=14144, per=25.29%, avg=13614.40, stdev=269.37, samples=10 00:32:48.824 iops : min= 1664, max= 1768, avg=1701.80, stdev=33.67, samples=10 00:32:48.824 lat (msec) : 2=0.32%, 4=7.25%, 10=92.41%, 50=0.02% 00:32:48.824 cpu : usr=94.90%, sys=4.58%, ctx=9, majf=0, minf=73 00:32:48.824 IO depths : 1=0.1%, 2=10.8%, 4=62.1%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 issued rwts: total=8511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.824 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.824 filename1: (groupid=0, jobs=1): err= 0: pid=2659336: Fri Jul 26 14:27:05 2024 00:32:48.824 read: IOPS=1696, BW=13.3MiB/s (13.9MB/s)(66.3MiB/5002msec) 00:32:48.824 slat (usec): min=5, max=301, avg=22.14, stdev=12.33 00:32:48.824 clat (usec): min=1012, max=8303, avg=4641.04, stdev=532.38 00:32:48.824 lat (usec): min=1026, max=8312, avg=4663.19, stdev=532.83 00:32:48.824 clat percentiles (usec): 00:32:48.824 | 1.00th=[ 3130], 5.00th=[ 3884], 10.00th=[ 4228], 20.00th=[ 4424], 00:32:48.824 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:32:48.824 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5473], 00:32:48.824 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8160], 00:32:48.824 | 99.99th=[ 8291] 00:32:48.824 bw ( KiB/s): min=13312, max=14192, per=25.23%, avg=13579.20, stdev=278.26, samples=10 00:32:48.824 iops : min= 1664, max= 1774, avg=1697.40, stdev=34.78, samples=10 00:32:48.824 lat (msec) : 2=0.08%, 4=6.57%, 10=93.34% 00:32:48.824 cpu : usr=94.12%, sys=5.10%, ctx=84, majf=0, minf=33 00:32:48.824 IO depths : 1=0.1%, 2=10.0%, 4=62.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.824 issued rwts: total=8488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.824 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:48.824 00:32:48.824 Run status group 0 (all jobs): 00:32:48.824 READ: bw=52.6MiB/s (55.1MB/s), 13.1MiB/s-13.3MiB/s (13.8MB/s-13.9MB/s), io=265MiB (278MB), run=5001-5042msec 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 14:27:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 00:32:49.084 real 0m24.720s 00:32:49.084 user 4m35.202s 00:32:49.084 sys 0m6.631s 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 ************************************ 00:32:49.084 END TEST fio_dif_rand_params 00:32:49.084 ************************************ 00:32:49.084 14:27:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:49.084 14:27:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:49.084 14:27:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 ************************************ 00:32:49.084 START TEST fio_dif_digest 00:32:49.084 ************************************ 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:49.084 14:27:05 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 bdev_null0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:49.084 [2024-07-26 14:27:05.957627] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:49.084 { 00:32:49.084 "params": { 00:32:49.084 "name": "Nvme$subsystem", 00:32:49.084 "trtype": "$TEST_TRANSPORT", 00:32:49.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.084 "adrfam": "ipv4", 00:32:49.084 "trsvcid": "$NVMF_PORT", 00:32:49.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.084 "hdgst": ${hdgst:-false}, 00:32:49.084 "ddgst": ${ddgst:-false} 00:32:49.084 }, 00:32:49.084 "method": "bdev_nvme_attach_controller" 00:32:49.084 } 00:32:49.084 EOF 00:32:49.084 )") 00:32:49.084 
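Relative to fio_dif_rand_params, the digest variant changes only the knobs traced just above: the null bdev is created with --dif-type 3 (protection metadata whose reference tag the controller does not check) instead of type 1, and hdgst/ddgst are preset to true so the ${hdgst:-false}/${ddgst:-false} expansions in the heredoc emit true, enabling NVMe/TCP header and data digests on the initiator connection. In shell form, as set at the top of the test:

NULL_DIF=3           # DIF type 3 protection metadata on the null bdev
bs=128k,128k,128k    # 128 KiB blocks for every I/O direction
numjobs=3
iodepth=3
runtime=10
hdgst=true           # header digest on the NVMe/TCP connection
ddgst=true           # data digest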
14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
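The ldd probe traced here decides what gets preloaded: if the fio plugin links against a sanitizer runtime, that library must load before the plugin does. Condensed to its essentials, with paths as they appear in this run (asan_lib resolves to empty below because this build is not sanitized):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# Resolve the ASan runtime the plugin was linked against, if any
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Preload the sanitizer (empty here) ahead of the plugin, then run fio with
# the generated JSON config and job file on fds 62 and 61
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61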
00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:49.084 14:27:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:49.084 "params": { 00:32:49.084 "name": "Nvme0", 00:32:49.084 "trtype": "tcp", 00:32:49.084 "traddr": "10.0.0.2", 00:32:49.084 "adrfam": "ipv4", 00:32:49.084 "trsvcid": "4420", 00:32:49.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:49.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:49.084 "hdgst": true, 00:32:49.084 "ddgst": true 00:32:49.084 }, 00:32:49.084 "method": "bdev_nvme_attach_controller" 00:32:49.084 }' 00:32:49.343 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:49.343 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:49.343 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:49.343 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.343 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:49.343 14:27:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:49.343 14:27:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:49.343 14:27:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:49.343 14:27:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:49.343 14:27:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.601 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:49.601 ... 
00:32:49.601 fio-3.35 00:32:49.601 Starting 3 threads 00:32:49.601 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.802 00:33:01.802 filename0: (groupid=0, jobs=1): err= 0: pid=2660172: Fri Jul 26 14:27:16 2024 00:33:01.802 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(244MiB/10042msec) 00:33:01.802 slat (nsec): min=5238, max=44347, avg=18066.95, stdev=3781.26 00:33:01.802 clat (usec): min=6378, max=55342, avg=15345.21, stdev=2247.24 00:33:01.802 lat (usec): min=6392, max=55352, avg=15363.27, stdev=2247.49 00:33:01.802 clat percentiles (usec): 00:33:01.802 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11994], 20.00th=[14353], 00:33:01.803 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15795], 60.00th=[16057], 00:33:01.803 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:33:01.803 | 99.00th=[18744], 99.50th=[19268], 99.90th=[25297], 99.95th=[55313], 00:33:01.803 | 99.99th=[55313] 00:33:01.803 bw ( KiB/s): min=23086, max=26624, per=34.89%, avg=25013.50, stdev=938.70, samples=20 00:33:01.803 iops : min= 180, max= 208, avg=195.40, stdev= 7.37, samples=20 00:33:01.803 lat (msec) : 10=1.59%, 20=98.16%, 50=0.20%, 100=0.05% 00:33:01.803 cpu : usr=94.05%, sys=5.46%, ctx=27, majf=0, minf=135 00:33:01.803 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.803 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.803 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:01.803 filename0: (groupid=0, jobs=1): err= 0: pid=2660173: Fri Jul 26 14:27:16 2024 00:33:01.803 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(249MiB/10046msec) 00:33:01.803 slat (nsec): min=4857, max=35964, avg=18072.21, stdev=3447.69 00:33:01.803 clat (usec): min=8339, max=58141, avg=15117.05, stdev=4676.55 00:33:01.803 lat (usec): min=8355, max=58158, avg=15135.13, stdev=4676.67 00:33:01.803 clat percentiles (usec): 00:33:01.803 | 1.00th=[ 9765], 5.00th=[10814], 10.00th=[12387], 20.00th=[13829], 00:33:01.803 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:33:01.803 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:33:01.803 | 99.00th=[54264], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:33:01.803 | 99.99th=[57934] 00:33:01.803 bw ( KiB/s): min=22016, max=27392, per=35.44%, avg=25408.00, stdev=1438.30, samples=20 00:33:01.803 iops : min= 172, max= 214, avg=198.50, stdev=11.24, samples=20 00:33:01.803 lat (msec) : 10=1.56%, 20=97.13%, 50=0.20%, 100=1.11% 00:33:01.803 cpu : usr=94.54%, sys=4.78%, ctx=31, majf=0, minf=124 00:33:01.803 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.803 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.803 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:01.803 filename0: (groupid=0, jobs=1): err= 0: pid=2660174: Fri Jul 26 14:27:16 2024 00:33:01.803 read: IOPS=167, BW=21.0MiB/s (22.0MB/s)(211MiB/10045msec) 00:33:01.803 slat (nsec): min=5753, max=60436, avg=21670.92, stdev=3824.99 00:33:01.803 clat (usec): min=9516, max=61436, avg=17842.01, stdev=7778.27 00:33:01.803 lat (usec): min=9533, max=61457, avg=17863.68, stdev=7778.36 00:33:01.803 clat percentiles (usec): 
00:33:01.803 | 1.00th=[10683], 5.00th=[14091], 10.00th=[14877], 20.00th=[15533], 00:33:01.803 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[16909], 00:33:01.803 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18482], 95.00th=[19530], 00:33:01.803 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60556], 99.95th=[61604], 00:33:01.803 | 99.99th=[61604] 00:33:01.803 bw ( KiB/s): min=18944, max=23808, per=30.03%, avg=21529.60, stdev=1517.70, samples=20 00:33:01.803 iops : min= 148, max= 186, avg=168.20, stdev=11.86, samples=20 00:33:01.803 lat (msec) : 10=0.18%, 20=95.49%, 50=0.89%, 100=3.44% 00:33:01.803 cpu : usr=94.59%, sys=4.60%, ctx=58, majf=0, minf=108 00:33:01.803 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.803 issued rwts: total=1684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.803 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:01.803 00:33:01.803 Run status group 0 (all jobs): 00:33:01.803 READ: bw=70.0MiB/s (73.4MB/s), 21.0MiB/s-24.7MiB/s (22.0MB/s-25.9MB/s), io=703MiB (738MB), run=10042-10046msec 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.803 00:33:01.803 real 0m11.128s 00:33:01.803 user 0m29.692s 00:33:01.803 sys 0m1.763s 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:01.803 14:27:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.803 ************************************ 00:33:01.803 END TEST fio_dif_digest 00:33:01.803 ************************************ 00:33:01.803 14:27:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:01.803 14:27:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:01.803 rmmod nvme_tcp 00:33:01.803 rmmod 
nvme_fabrics 00:33:01.803 rmmod nvme_keyring 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2653893 ']' 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2653893 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2653893 ']' 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2653893 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2653893 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2653893' 00:33:01.803 killing process with pid 2653893 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2653893 00:33:01.803 14:27:17 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2653893 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:01.803 14:27:17 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:02.060 Waiting for block devices as requested 00:33:02.060 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:33:02.318 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:02.318 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:02.576 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:02.576 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:02.576 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:02.576 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:02.834 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:02.834 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:02.834 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:02.835 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:03.092 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:03.093 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:03.093 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:03.093 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:03.351 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:03.351 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:03.351 14:27:20 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:03.351 14:27:20 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:03.351 14:27:20 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:03.351 14:27:20 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:03.351 14:27:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.351 14:27:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:03.351 14:27:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.884 14:27:22 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:05.884 00:33:05.884 real 1m8.727s 00:33:05.884 user 6m33.510s 00:33:05.884 sys 0m18.888s 00:33:05.884 14:27:22 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:05.884 14:27:22 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:33:05.884 ************************************ 00:33:05.884 END TEST nvmf_dif 00:33:05.884 ************************************ 00:33:05.884 14:27:22 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:05.884 14:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:05.884 14:27:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:05.884 14:27:22 -- common/autotest_common.sh@10 -- # set +x 00:33:05.884 ************************************ 00:33:05.884 START TEST nvmf_abort_qd_sizes 00:33:05.884 ************************************ 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:05.884 * Looking for test storage... 00:33:05.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.884 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.885 14:27:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:33:05.885 14:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:08.438 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:08.438 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:08.438 Found net devices under 0000:84:00.0: cvl_0_0 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:08.438 Found net devices under 0000:84:00.1: cvl_0_1 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
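The trace above is nvmf/common.sh discovering usable test NICs purely from PCI vendor/device IDs (Intel E810, 0x8086:0x159b here) and then reading each function's netdev name out of sysfs. A minimal standalone sketch of that discovery step, assuming lspci from pciutils is available; the helper below is illustrative only and is not part of the harness:

#!/usr/bin/env bash
# Illustrative sketch only (not the harness code): list the net interfaces
# backed by Intel E810 NICs (PCI ID 8086:159b, as matched in the trace above),
# mirroring what gather_supported_nvmf_pci_devs does with its pci_bus_cache.
set -euo pipefail
shopt -s nullglob
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    # Each matching PCI function publishes its netdev name under sysfs.
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net device under $pci: ${netdir##*/}"
    done
done

Run on this node it should print the two cvl_0_* ports reported above; the real harness additionally special-cases driver names (ice vs. mlx5) and RDMA-capable devices, as the surrounding [[ ... ]] checks in the trace show.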
00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.438 14:27:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:08.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:33:08.438 00:33:08.438 --- 10.0.0.2 ping statistics --- 00:33:08.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.438 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:33:08.438 00:33:08.438 --- 10.0.0.1 ping statistics --- 00:33:08.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.438 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:08.438 14:27:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:09.817 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:09.817 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:09.817 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:09.817 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:09.817 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:09.817 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:09.817 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:09.817 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:09.817 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:10.793 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2665629 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2665629 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2665629 ']' 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:11.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:11.052 14:27:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:11.052 [2024-07-26 14:27:27.790649] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:33:11.052 [2024-07-26 14:27:27.790747] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.052 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.052 [2024-07-26 14:27:27.895409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:11.310 [2024-07-26 14:27:28.034847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.310 [2024-07-26 14:27:28.034910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.310 [2024-07-26 14:27:28.034926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.310 [2024-07-26 14:27:28.034939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.310 [2024-07-26 14:27:28.034951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.310 [2024-07-26 14:27:28.038459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.310 [2024-07-26 14:27:28.038512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:11.310 [2024-07-26 14:27:28.038564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:11.310 [2024-07-26 14:27:28.038569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:33:11.310 14:27:28 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:33:11.310 14:27:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:11.569 14:27:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:33:11.569 14:27:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:11.569 14:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:11.569 14:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.569 14:27:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:11.569 ************************************ 00:33:11.569 START TEST spdk_target_abort 00:33:11.569 ************************************ 00:33:11.569 14:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:33:11.569 14:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:11.569 14:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:33:11.569 14:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.569 14:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 spdk_targetn1 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 [2024-07-26 14:27:31.068392] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 [2024-07-26 14:27:31.100692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:14.847 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:14.848 14:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:14.848 EAL: No free 2048 kB hugepages 
reported on node 1 00:33:18.130 Initializing NVMe Controllers 00:33:18.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:18.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:18.130 Initialization complete. Launching workers. 00:33:18.130 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10054, failed: 0 00:33:18.130 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1302, failed to submit 8752 00:33:18.130 success 753, unsuccess 549, failed 0 00:33:18.130 14:27:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:18.130 14:27:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:18.130 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.409 Initializing NVMe Controllers 00:33:21.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:21.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:21.410 Initialization complete. Launching workers. 00:33:21.410 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8633, failed: 0 00:33:21.410 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 7380 00:33:21.410 success 315, unsuccess 938, failed 0 00:33:21.410 14:27:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:21.410 14:27:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:21.410 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.696 Initializing NVMe Controllers 00:33:24.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:24.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:24.696 Initialization complete. Launching workers. 
00:33:24.696 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29936, failed: 0 00:33:24.696 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2691, failed to submit 27245 00:33:24.696 success 459, unsuccess 2232, failed 0 00:33:24.696 14:27:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:24.696 14:27:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.696 14:27:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:24.696 14:27:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.696 14:27:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:24.696 14:27:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.696 14:27:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2665629 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2665629 ']' 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2665629 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2665629 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2665629' 00:33:25.632 killing process with pid 2665629 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2665629 00:33:25.632 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2665629 00:33:25.893 00:33:25.893 real 0m14.415s 00:33:25.893 user 0m54.584s 00:33:25.893 sys 0m2.750s 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:25.893 ************************************ 00:33:25.893 END TEST spdk_target_abort 00:33:25.893 ************************************ 00:33:25.893 14:27:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:25.893 14:27:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:25.893 14:27:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:25.893 14:27:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:25.893 ************************************ 00:33:25.893 START TEST kernel_target_abort 00:33:25.893 
************************************ 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:25.893 14:27:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:27.270 Waiting for block devices as requested 00:33:27.531 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:33:27.531 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:27.531 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:27.790 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:27.790 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:27.790 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:28.051 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:28.051 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:28.051 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:28.051 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:28.312 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:28.312 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:28.312 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:28.312 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:28.572 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:28.572 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:28.572 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:28.832 No valid GPT data, bailing 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:28.832 14:27:45 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:28.832 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:33:29.091 00:33:29.091 Discovery Log Number of Records 2, Generation counter 2 00:33:29.091 =====Discovery Log Entry 0====== 00:33:29.091 trtype: tcp 00:33:29.091 adrfam: ipv4 00:33:29.091 subtype: current discovery subsystem 00:33:29.091 treq: not specified, sq flow control disable supported 00:33:29.091 portid: 1 00:33:29.091 trsvcid: 4420 00:33:29.091 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:29.091 traddr: 10.0.0.1 00:33:29.091 eflags: none 00:33:29.091 sectype: none 00:33:29.091 =====Discovery Log Entry 1====== 00:33:29.091 trtype: tcp 00:33:29.091 adrfam: ipv4 00:33:29.091 subtype: nvme subsystem 00:33:29.091 treq: not specified, sq flow control disable supported 00:33:29.091 portid: 1 00:33:29.091 trsvcid: 4420 00:33:29.091 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:29.091 traddr: 10.0.0.1 00:33:29.091 eflags: none 00:33:29.091 sectype: none 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.091 14:27:45 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:29.091 14:27:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:29.091 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.444 Initializing NVMe Controllers 00:33:32.444 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:32.444 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:32.444 Initialization complete. Launching workers. 00:33:32.444 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33465, failed: 0 00:33:32.444 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33465, failed to submit 0 00:33:32.444 success 0, unsuccess 33465, failed 0 00:33:32.444 14:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:32.444 14:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:32.444 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.734 Initializing NVMe Controllers 00:33:35.734 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:35.734 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:35.734 Initialization complete. Launching workers. 
00:33:35.734 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64563, failed: 0 00:33:35.734 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16282, failed to submit 48281 00:33:35.734 success 0, unsuccess 16282, failed 0 00:33:35.734 14:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:35.734 14:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:35.734 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.018 Initializing NVMe Controllers 00:33:39.018 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:39.018 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:39.018 Initialization complete. Launching workers. 00:33:39.018 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63704, failed: 0 00:33:39.018 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15914, failed to submit 47790 00:33:39.018 success 0, unsuccess 15914, failed 0 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:39.018 14:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:39.955 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:39.955 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:39.955 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:39.955 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:39.955 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:39.955 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:39.955 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:39.955 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:39.955 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:39.955 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:39.955 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:39.955 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:39.955 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:39.955 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:33:39.955 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:39.955 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:40.892 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:33:41.151 00:33:41.151 real 0m15.118s 00:33:41.151 user 0m5.649s 00:33:41.151 sys 0m3.923s 00:33:41.151 14:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:41.151 14:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:41.151 ************************************ 00:33:41.151 END TEST kernel_target_abort 00:33:41.151 ************************************ 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:41.151 rmmod nvme_tcp 00:33:41.151 rmmod nvme_fabrics 00:33:41.151 rmmod nvme_keyring 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2665629 ']' 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2665629 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2665629 ']' 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2665629 00:33:41.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2665629) - No such process 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2665629 is not found' 00:33:41.151 Process with pid 2665629 is not found 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:41.151 14:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:42.533 Waiting for block devices as requested 00:33:42.533 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:33:42.792 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:42.792 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:43.052 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:43.052 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:43.052 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:43.052 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:43.311 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:43.311 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:43.311 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:43.311 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:43.570 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:43.570 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:43.570 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:43.570 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:43.829 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:33:43.829 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:43.829 14:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:43.829 14:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:43.829 14:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:43.829 14:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:43.829 14:28:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.829 14:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:43.829 14:28:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.367 14:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:46.367 00:33:46.367 real 0m40.355s 00:33:46.367 user 1m2.740s 00:33:46.367 sys 0m11.110s 00:33:46.367 14:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:46.367 14:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:46.367 ************************************ 00:33:46.367 END TEST nvmf_abort_qd_sizes 00:33:46.367 ************************************ 00:33:46.367 14:28:02 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:46.367 14:28:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:46.367 14:28:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:46.367 14:28:02 -- common/autotest_common.sh@10 -- # set +x 00:33:46.367 ************************************ 00:33:46.367 START TEST keyring_file 00:33:46.367 ************************************ 00:33:46.367 14:28:02 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:46.367 * Looking for test storage... 
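For reference, the nvmftestfini/reset sequence traced above reduces to a handful of commands (a sketch, assuming root, the tcp transport of this run, and the cvl_0_1 interface name taken from the log; setup.sh reset is what produces the vfio-pci -> ioatdma/nvme rebind lines):

  modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics/nvme_keyring, as rmmod'd above
  modprobe -v -r nvme-fabrics   # retried up to 20 times by the harness under set +e
  ip -4 addr flush cvl_0_1      # drop the test address from the target-side NIC
  ./scripts/setup.sh reset      # rebind PCI devices back to their kernel drivers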
00:33:46.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:46.367 14:28:02 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:46.367 14:28:02 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.367 14:28:02 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.367 14:28:02 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.368 14:28:02 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.368 14:28:02 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.368 14:28:02 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.368 14:28:02 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.368 14:28:02 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.368 14:28:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:46.368 14:28:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NhINlLCn60 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:46.368 14:28:02 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NhINlLCn60 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NhINlLCn60 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NhINlLCn60 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0YUbKk1yxc 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:46.368 14:28:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0YUbKk1yxc 00:33:46.368 14:28:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0YUbKk1yxc 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0YUbKk1yxc 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=2671522 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:46.368 14:28:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2671522 00:33:46.368 14:28:02 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2671522 ']' 00:33:46.368 14:28:02 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.368 14:28:02 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:46.368 14:28:02 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.368 14:28:02 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:46.368 14:28:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:46.368 [2024-07-26 14:28:03.025310] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
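The prep_key steps above (mktemp, format_interchange_psk, chmod 0600) turn a raw hex key into an NVMe TLS PSK interchange string and stash it in a mode-0600 temp file. A sketch of what the inline python step is believed to compute, assuming the TP 8018 interchange layout (base64 of the key bytes followed by their little-endian CRC-32, wrapped in an NVMeTLSkey-1 header; the "00" hash indicator for digest 0 is an assumption, not read from the log):

  key=00112233445566778899aabbccddeeff
  python3 - "$key" <<'EOF'
  import sys, base64, zlib, struct
  raw = bytes.fromhex(sys.argv[1])
  crc = struct.pack("<I", zlib.crc32(raw))   # CRC-32 of the key bytes, little endian
  print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode())
  EOF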
00:33:46.368 [2024-07-26 14:28:03.025438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671522 ] 00:33:46.368 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.368 [2024-07-26 14:28:03.098610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.368 [2024-07-26 14:28:03.219879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.627 14:28:03 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:46.627 14:28:03 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:46.627 14:28:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:46.627 14:28:03 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.627 14:28:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:46.627 [2024-07-26 14:28:03.499373] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.886 null0 00:33:46.886 [2024-07-26 14:28:03.531441] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:46.886 [2024-07-26 14:28:03.532001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:46.886 [2024-07-26 14:28:03.539447] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.886 14:28:03 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:46.886 [2024-07-26 14:28:03.547458] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:46.886 request: 00:33:46.886 { 00:33:46.886 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.886 "secure_channel": false, 00:33:46.886 "listen_address": { 00:33:46.886 "trtype": "tcp", 00:33:46.886 "traddr": "127.0.0.1", 00:33:46.886 "trsvcid": "4420" 00:33:46.886 }, 00:33:46.886 "method": "nvmf_subsystem_add_listener", 00:33:46.886 "req_id": 1 00:33:46.886 } 00:33:46.886 Got JSON-RPC error response 00:33:46.886 response: 00:33:46.886 { 00:33:46.886 "code": -32602, 00:33:46.886 "message": "Invalid parameters" 00:33:46.886 } 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 
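The NOT wrapper above exercises the failure path: the target is already listening on 127.0.0.1:4420, so a second nvmf_subsystem_add_listener for the same address must come back as a -32602 "Invalid parameters" JSON-RPC error, and the step passes only because the call fails. Stripped of the harness, the check is roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # the identical listener was already added during target setup above;
  # re-adding it is expected to fail with "Listener already exists"
  if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
      nqn.2016-06.io.spdk:cnode0; then
      echo "FAIL: duplicate listener was accepted"
  fi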
00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:46.886 14:28:03 keyring_file -- keyring/file.sh@46 -- # bperfpid=2671532 00:33:46.886 14:28:03 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:46.886 14:28:03 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2671532 /var/tmp/bperf.sock 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2671532 ']' 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:46.886 14:28:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:46.886 [2024-07-26 14:28:03.600103] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 00:33:46.886 [2024-07-26 14:28:03.600180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671532 ] 00:33:46.886 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.886 [2024-07-26 14:28:03.666882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.146 [2024-07-26 14:28:03.792181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.405 14:28:04 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:47.405 14:28:04 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:47.405 14:28:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:47.405 14:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:47.663 14:28:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0YUbKk1yxc 00:33:47.663 14:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0YUbKk1yxc 00:33:47.921 14:28:04 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:47.921 14:28:04 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:47.921 14:28:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.921 14:28:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:47.921 14:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.180 14:28:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.NhINlLCn60 == \/\t\m\p\/\t\m\p\.\N\h\I\N\l\L\C\n\6\0 ]] 00:33:48.180 14:28:05 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:48.180 14:28:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:48.180 14:28:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:48.180 14:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.180 14:28:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:48.749 14:28:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0YUbKk1yxc == \/\t\m\p\/\t\m\p\.\0\Y\U\b\K\k\1\y\x\c ]] 00:33:48.749 14:28:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:48.749 14:28:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:48.749 14:28:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:48.749 14:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:49.041 14:28:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:49.041 14:28:05 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:49.041 14:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:49.608 [2024-07-26 14:28:06.193300] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:49.608 nvme0n1 00:33:49.608 14:28:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:49.608 14:28:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:49.608 14:28:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:49.608 14:28:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:49.608 14:28:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:49.608 14:28:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:49.867 14:28:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:49.868 14:28:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:49.868 14:28:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:49.868 14:28:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:49.868 14:28:06 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:49.868 14:28:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:49.868 14:28:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:50.436 14:28:07 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:50.436 14:28:07 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:50.436 Running I/O for 1 seconds... 00:33:51.813 00:33:51.813 Latency(us) 00:33:51.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.813 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:51.813 nvme0n1 : 1.03 5188.13 20.27 0.00 0.00 24373.03 7233.23 32816.55 00:33:51.813 =================================================================================================================== 00:33:51.813 Total : 5188.13 20.27 0.00 0.00 24373.03 7233.23 32816.55 00:33:51.813 0 00:33:51.813 14:28:08 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:51.813 14:28:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:52.071 14:28:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:52.071 14:28:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:52.071 14:28:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:52.071 14:28:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:52.071 14:28:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:52.071 14:28:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.330 14:28:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:52.330 14:28:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:52.330 14:28:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:52.330 14:28:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:52.330 14:28:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:52.330 14:28:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.330 14:28:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:52.588 14:28:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:52.588 14:28:09 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:52.588 14:28:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:52.588 14:28:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:52.588 14:28:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:52.588 14:28:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:52.588 14:28:09 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:52.588 14:28:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:52.588 14:28:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:52.588 14:28:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:53.154 [2024-07-26 14:28:09.772912] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:53.154 [2024-07-26 14:28:09.773251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18328a0 (107): Transport endpoint is not connected 00:33:53.154 [2024-07-26 14:28:09.774243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18328a0 (9): Bad file descriptor 00:33:53.154 [2024-07-26 14:28:09.775241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:53.154 [2024-07-26 14:28:09.775265] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:53.154 [2024-07-26 14:28:09.775282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:53.154 request: 00:33:53.154 { 00:33:53.154 "name": "nvme0", 00:33:53.154 "trtype": "tcp", 00:33:53.154 "traddr": "127.0.0.1", 00:33:53.154 "adrfam": "ipv4", 00:33:53.154 "trsvcid": "4420", 00:33:53.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:53.154 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:53.154 "prchk_reftag": false, 00:33:53.154 "prchk_guard": false, 00:33:53.154 "hdgst": false, 00:33:53.154 "ddgst": false, 00:33:53.154 "psk": "key1", 00:33:53.154 "method": "bdev_nvme_attach_controller", 00:33:53.154 "req_id": 1 00:33:53.154 } 00:33:53.154 Got JSON-RPC error response 00:33:53.154 response: 00:33:53.154 { 00:33:53.154 "code": -5, 00:33:53.154 "message": "Input/output error" 00:33:53.154 } 00:33:53.154 14:28:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:53.154 14:28:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:53.154 14:28:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:53.154 14:28:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:53.154 14:28:09 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:53.154 14:28:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:53.154 14:28:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:53.154 14:28:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:53.154 14:28:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:53.154 14:28:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:53.412 14:28:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:53.412 14:28:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:53.412 14:28:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:53.412 14:28:10 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:53.412 14:28:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:53.412 14:28:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:53.412 14:28:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:53.671 14:28:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:53.671 14:28:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:53.671 14:28:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:54.238 14:28:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:54.238 14:28:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:54.497 14:28:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:54.497 14:28:11 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:54.497 14:28:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.065 14:28:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:55.065 14:28:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.NhINlLCn60 00:33:55.065 14:28:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:55.065 14:28:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:55.065 14:28:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:55.065 14:28:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:55.065 14:28:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:55.065 14:28:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:55.065 14:28:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:55.065 14:28:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:55.065 14:28:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:55.323 [2024-07-26 14:28:11.976823] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NhINlLCn60': 0100660 00:33:55.323 [2024-07-26 14:28:11.976861] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:55.323 request: 00:33:55.323 { 00:33:55.323 "name": "key0", 00:33:55.323 "path": "/tmp/tmp.NhINlLCn60", 00:33:55.323 "method": "keyring_file_add_key", 00:33:55.323 "req_id": 1 00:33:55.323 } 00:33:55.323 Got JSON-RPC error response 00:33:55.323 response: 00:33:55.323 { 00:33:55.323 "code": -1, 00:33:55.323 "message": "Operation not permitted" 00:33:55.323 } 00:33:55.323 14:28:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:55.323 14:28:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:55.323 14:28:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:55.323 14:28:11 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:55.323 14:28:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.NhINlLCn60 00:33:55.323 14:28:12 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:55.323 14:28:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NhINlLCn60 00:33:55.582 14:28:12 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.NhINlLCn60 00:33:55.582 14:28:12 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:55.582 14:28:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:55.582 14:28:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:55.582 14:28:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:55.582 14:28:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:55.582 14:28:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.842 14:28:12 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:55.842 14:28:12 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:55.842 14:28:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:55.842 14:28:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:55.842 14:28:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:55.842 14:28:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:55.842 14:28:12 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:55.842 14:28:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:55.842 14:28:12 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:55.842 14:28:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:56.103 [2024-07-26 14:28:12.899309] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NhINlLCn60': No such file or directory 00:33:56.103 [2024-07-26 14:28:12.899355] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:56.103 [2024-07-26 14:28:12.899387] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:56.103 [2024-07-26 14:28:12.899400] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:56.103 [2024-07-26 14:28:12.899414] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:56.103 request: 00:33:56.103 { 00:33:56.103 "name": "nvme0", 00:33:56.103 "trtype": "tcp", 00:33:56.103 "traddr": "127.0.0.1", 00:33:56.103 "adrfam": "ipv4", 00:33:56.103 
"trsvcid": "4420", 00:33:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.103 "prchk_reftag": false, 00:33:56.103 "prchk_guard": false, 00:33:56.103 "hdgst": false, 00:33:56.103 "ddgst": false, 00:33:56.103 "psk": "key0", 00:33:56.103 "method": "bdev_nvme_attach_controller", 00:33:56.103 "req_id": 1 00:33:56.103 } 00:33:56.103 Got JSON-RPC error response 00:33:56.103 response: 00:33:56.103 { 00:33:56.103 "code": -19, 00:33:56.103 "message": "No such device" 00:33:56.103 } 00:33:56.103 14:28:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:56.103 14:28:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:56.103 14:28:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:56.103 14:28:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:56.103 14:28:12 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:56.103 14:28:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:56.675 14:28:13 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:56.675 14:28:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:56.675 14:28:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:56.675 14:28:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:56.675 14:28:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:56.676 14:28:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:56.676 14:28:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.52oJTWcgGC 00:33:56.676 14:28:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:56.676 14:28:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:56.676 14:28:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:56.676 14:28:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:56.676 14:28:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:56.676 14:28:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:56.676 14:28:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:56.936 14:28:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.52oJTWcgGC 00:33:56.936 14:28:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.52oJTWcgGC 00:33:56.936 14:28:13 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.52oJTWcgGC 00:33:56.936 14:28:13 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.52oJTWcgGC 00:33:56.936 14:28:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.52oJTWcgGC 00:33:57.505 14:28:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:57.505 14:28:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:58.073 nvme0n1 00:33:58.073 
14:28:14 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:58.073 14:28:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:58.073 14:28:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:58.073 14:28:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:58.073 14:28:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:58.073 14:28:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:58.641 14:28:15 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:58.641 14:28:15 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:58.641 14:28:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:58.899 14:28:15 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:58.899 14:28:15 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:58.899 14:28:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:58.899 14:28:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:58.899 14:28:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:59.158 14:28:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:59.158 14:28:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:59.158 14:28:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:59.158 14:28:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:59.158 14:28:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:59.158 14:28:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:59.158 14:28:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:59.416 14:28:16 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:59.416 14:28:16 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:59.416 14:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:59.674 14:28:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:59.674 14:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:59.674 14:28:16 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:59.932 14:28:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:59.932 14:28:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.52oJTWcgGC 00:33:59.932 14:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.52oJTWcgGC 00:34:00.191 14:28:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0YUbKk1yxc 00:34:00.191 14:28:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0YUbKk1yxc 00:34:00.451 14:28:17 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:00.451 14:28:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:01.020 nvme0n1 00:34:01.279 14:28:17 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:34:01.279 14:28:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:01.537 14:28:18 keyring_file -- keyring/file.sh@112 -- # config='{ 00:34:01.537 "subsystems": [ 00:34:01.537 { 00:34:01.537 "subsystem": "keyring", 00:34:01.537 "config": [ 00:34:01.537 { 00:34:01.537 "method": "keyring_file_add_key", 00:34:01.537 "params": { 00:34:01.537 "name": "key0", 00:34:01.537 "path": "/tmp/tmp.52oJTWcgGC" 00:34:01.537 } 00:34:01.537 }, 00:34:01.537 { 00:34:01.537 "method": "keyring_file_add_key", 00:34:01.537 "params": { 00:34:01.537 "name": "key1", 00:34:01.537 "path": "/tmp/tmp.0YUbKk1yxc" 00:34:01.537 } 00:34:01.537 } 00:34:01.537 ] 00:34:01.537 }, 00:34:01.537 { 00:34:01.537 "subsystem": "iobuf", 00:34:01.537 "config": [ 00:34:01.537 { 00:34:01.537 "method": "iobuf_set_options", 00:34:01.537 "params": { 00:34:01.537 "small_pool_count": 8192, 00:34:01.537 "large_pool_count": 1024, 00:34:01.537 "small_bufsize": 8192, 00:34:01.537 "large_bufsize": 135168 00:34:01.537 } 00:34:01.537 } 00:34:01.537 ] 00:34:01.537 }, 00:34:01.537 { 00:34:01.537 "subsystem": "sock", 00:34:01.537 "config": [ 00:34:01.537 { 00:34:01.537 "method": "sock_set_default_impl", 00:34:01.537 "params": { 00:34:01.537 "impl_name": "posix" 00:34:01.537 } 00:34:01.537 }, 00:34:01.537 { 00:34:01.538 "method": "sock_impl_set_options", 00:34:01.538 "params": { 00:34:01.538 "impl_name": "ssl", 00:34:01.538 "recv_buf_size": 4096, 00:34:01.538 "send_buf_size": 4096, 00:34:01.538 "enable_recv_pipe": true, 00:34:01.538 "enable_quickack": false, 00:34:01.538 "enable_placement_id": 0, 00:34:01.538 "enable_zerocopy_send_server": true, 00:34:01.538 "enable_zerocopy_send_client": false, 00:34:01.538 "zerocopy_threshold": 0, 00:34:01.538 "tls_version": 0, 00:34:01.538 "enable_ktls": false 00:34:01.538 } 00:34:01.538 }, 00:34:01.538 { 00:34:01.538 "method": "sock_impl_set_options", 00:34:01.538 "params": { 00:34:01.538 "impl_name": "posix", 00:34:01.538 "recv_buf_size": 2097152, 00:34:01.538 "send_buf_size": 2097152, 00:34:01.538 "enable_recv_pipe": true, 00:34:01.538 "enable_quickack": false, 00:34:01.538 "enable_placement_id": 0, 00:34:01.538 "enable_zerocopy_send_server": true, 00:34:01.538 "enable_zerocopy_send_client": false, 00:34:01.538 "zerocopy_threshold": 0, 00:34:01.538 "tls_version": 0, 00:34:01.538 "enable_ktls": false 00:34:01.538 } 00:34:01.538 } 00:34:01.538 ] 00:34:01.538 }, 00:34:01.538 { 00:34:01.538 "subsystem": "vmd", 00:34:01.538 "config": [] 00:34:01.538 }, 00:34:01.538 { 00:34:01.538 "subsystem": "accel", 00:34:01.538 "config": [ 00:34:01.538 { 00:34:01.538 "method": "accel_set_options", 00:34:01.538 "params": { 00:34:01.538 "small_cache_size": 128, 00:34:01.538 "large_cache_size": 16, 00:34:01.538 "task_count": 2048, 00:34:01.538 "sequence_count": 2048, 00:34:01.538 "buf_count": 2048 00:34:01.538 } 00:34:01.538 } 00:34:01.538 ] 00:34:01.538 
}, 00:34:01.538 { 00:34:01.538 "subsystem": "bdev", 00:34:01.538 "config": [ 00:34:01.538 { 00:34:01.538 "method": "bdev_set_options", 00:34:01.538 "params": { 00:34:01.538 "bdev_io_pool_size": 65535, 00:34:01.538 "bdev_io_cache_size": 256, 00:34:01.538 "bdev_auto_examine": true, 00:34:01.538 "iobuf_small_cache_size": 128, 00:34:01.538 "iobuf_large_cache_size": 16 00:34:01.538 } 00:34:01.538 }, 00:34:01.538 { 00:34:01.538 "method": "bdev_raid_set_options", 00:34:01.538 "params": { 00:34:01.538 "process_window_size_kb": 1024, 00:34:01.538 "process_max_bandwidth_mb_sec": 0 00:34:01.538 } 00:34:01.538 }, 00:34:01.538 { 00:34:01.538 "method": "bdev_iscsi_set_options", 00:34:01.538 "params": { 00:34:01.538 "timeout_sec": 30 00:34:01.538 } 00:34:01.538 }, 00:34:01.538 { 00:34:01.538 "method": "bdev_nvme_set_options", 00:34:01.538 "params": { 00:34:01.538 "action_on_timeout": "none", 00:34:01.538 "timeout_us": 0, 00:34:01.538 "timeout_admin_us": 0, 00:34:01.538 "keep_alive_timeout_ms": 10000, 00:34:01.538 "arbitration_burst": 0, 00:34:01.538 "low_priority_weight": 0, 00:34:01.538 "medium_priority_weight": 0, 00:34:01.538 "high_priority_weight": 0, 00:34:01.538 "nvme_adminq_poll_period_us": 10000, 00:34:01.538 "nvme_ioq_poll_period_us": 0, 00:34:01.538 "io_queue_requests": 512, 00:34:01.538 "delay_cmd_submit": true, 00:34:01.538 "transport_retry_count": 4, 00:34:01.538 "bdev_retry_count": 3, 00:34:01.538 "transport_ack_timeout": 0, 00:34:01.538 "ctrlr_loss_timeout_sec": 0, 00:34:01.538 "reconnect_delay_sec": 0, 00:34:01.538 "fast_io_fail_timeout_sec": 0, 00:34:01.538 "disable_auto_failback": false, 00:34:01.538 "generate_uuids": false, 00:34:01.538 "transport_tos": 0, 00:34:01.538 "nvme_error_stat": false, 00:34:01.538 "rdma_srq_size": 0, 00:34:01.538 "io_path_stat": false, 00:34:01.538 "allow_accel_sequence": false, 00:34:01.538 "rdma_max_cq_size": 0, 00:34:01.538 "rdma_cm_event_timeout_ms": 0, 00:34:01.538 "dhchap_digests": [ 00:34:01.538 "sha256", 00:34:01.538 "sha384", 00:34:01.538 "sha512" 00:34:01.538 ], 00:34:01.538 "dhchap_dhgroups": [ 00:34:01.538 "null", 00:34:01.538 "ffdhe2048", 00:34:01.538 "ffdhe3072", 00:34:01.538 "ffdhe4096", 00:34:01.538 "ffdhe6144", 00:34:01.538 "ffdhe8192" 00:34:01.538 ] 00:34:01.538 } 00:34:01.538 }, 00:34:01.538 { 00:34:01.538 "method": "bdev_nvme_attach_controller", 00:34:01.538 "params": { 00:34:01.538 "name": "nvme0", 00:34:01.538 "trtype": "TCP", 00:34:01.538 "adrfam": "IPv4", 00:34:01.538 "traddr": "127.0.0.1", 00:34:01.538 "trsvcid": "4420", 00:34:01.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.538 "prchk_reftag": false, 00:34:01.538 "prchk_guard": false, 00:34:01.538 "ctrlr_loss_timeout_sec": 0, 00:34:01.538 "reconnect_delay_sec": 0, 00:34:01.538 "fast_io_fail_timeout_sec": 0, 00:34:01.538 "psk": "key0", 00:34:01.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.538 "hdgst": false, 00:34:01.538 "ddgst": false 00:34:01.538 } 00:34:01.538 }, 00:34:01.538 { 00:34:01.539 "method": "bdev_nvme_set_hotplug", 00:34:01.539 "params": { 00:34:01.539 "period_us": 100000, 00:34:01.539 "enable": false 00:34:01.539 } 00:34:01.539 }, 00:34:01.539 { 00:34:01.539 "method": "bdev_wait_for_examine" 00:34:01.539 } 00:34:01.539 ] 00:34:01.539 }, 00:34:01.539 { 00:34:01.539 "subsystem": "nbd", 00:34:01.539 "config": [] 00:34:01.539 } 00:34:01.539 ] 00:34:01.539 }' 00:34:01.539 14:28:18 keyring_file -- keyring/file.sh@114 -- # killprocess 2671532 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2671532 ']' 00:34:01.539 14:28:18 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 2671532 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2671532 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2671532' 00:34:01.539 killing process with pid 2671532 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@969 -- # kill 2671532 00:34:01.539 Received shutdown signal, test time was about 1.000000 seconds 00:34:01.539 00:34:01.539 Latency(us) 00:34:01.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.539 =================================================================================================================== 00:34:01.539 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:01.539 14:28:18 keyring_file -- common/autotest_common.sh@974 -- # wait 2671532 00:34:01.798 14:28:18 keyring_file -- keyring/file.sh@117 -- # bperfpid=2673394 00:34:01.798 14:28:18 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2673394 /var/tmp/bperf.sock 00:34:01.798 14:28:18 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2673394 ']' 00:34:01.798 14:28:18 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:01.798 14:28:18 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:01.798 14:28:18 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:01.798 14:28:18 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
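The second bdevperf above is booted from the first instance's saved JSON configuration instead of being reconfigured RPC by RPC: the -c /dev/fd/63 in its command line is bash process substitution around the save_config dump echoed below. Reduced to its essentials (binary and script paths as in this workspace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  config=$($rpc -s /var/tmp/bperf.sock save_config)   # the JSON blob shown above
  # <(...) shows up as /dev/fd/63 in the child's argv, as seen in the log
  $bperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
      -c <(echo "$config")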
00:34:01.798 14:28:18 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:01.798 14:28:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:01.798 14:28:18 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:34:01.798 "subsystems": [ 00:34:01.798 { 00:34:01.798 "subsystem": "keyring", 00:34:01.798 "config": [ 00:34:01.798 { 00:34:01.798 "method": "keyring_file_add_key", 00:34:01.798 "params": { 00:34:01.798 "name": "key0", 00:34:01.798 "path": "/tmp/tmp.52oJTWcgGC" 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "keyring_file_add_key", 00:34:01.798 "params": { 00:34:01.798 "name": "key1", 00:34:01.798 "path": "/tmp/tmp.0YUbKk1yxc" 00:34:01.798 } 00:34:01.798 } 00:34:01.798 ] 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "subsystem": "iobuf", 00:34:01.798 "config": [ 00:34:01.798 { 00:34:01.798 "method": "iobuf_set_options", 00:34:01.798 "params": { 00:34:01.798 "small_pool_count": 8192, 00:34:01.798 "large_pool_count": 1024, 00:34:01.798 "small_bufsize": 8192, 00:34:01.798 "large_bufsize": 135168 00:34:01.798 } 00:34:01.798 } 00:34:01.798 ] 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "subsystem": "sock", 00:34:01.798 "config": [ 00:34:01.798 { 00:34:01.798 "method": "sock_set_default_impl", 00:34:01.798 "params": { 00:34:01.798 "impl_name": "posix" 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "sock_impl_set_options", 00:34:01.798 "params": { 00:34:01.798 "impl_name": "ssl", 00:34:01.798 "recv_buf_size": 4096, 00:34:01.798 "send_buf_size": 4096, 00:34:01.798 "enable_recv_pipe": true, 00:34:01.798 "enable_quickack": false, 00:34:01.798 "enable_placement_id": 0, 00:34:01.798 "enable_zerocopy_send_server": true, 00:34:01.798 "enable_zerocopy_send_client": false, 00:34:01.798 "zerocopy_threshold": 0, 00:34:01.798 "tls_version": 0, 00:34:01.798 "enable_ktls": false 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "sock_impl_set_options", 00:34:01.798 "params": { 00:34:01.798 "impl_name": "posix", 00:34:01.798 "recv_buf_size": 2097152, 00:34:01.798 "send_buf_size": 2097152, 00:34:01.798 "enable_recv_pipe": true, 00:34:01.798 "enable_quickack": false, 00:34:01.798 "enable_placement_id": 0, 00:34:01.798 "enable_zerocopy_send_server": true, 00:34:01.798 "enable_zerocopy_send_client": false, 00:34:01.798 "zerocopy_threshold": 0, 00:34:01.798 "tls_version": 0, 00:34:01.798 "enable_ktls": false 00:34:01.798 } 00:34:01.798 } 00:34:01.798 ] 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "subsystem": "vmd", 00:34:01.798 "config": [] 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "subsystem": "accel", 00:34:01.798 "config": [ 00:34:01.798 { 00:34:01.798 "method": "accel_set_options", 00:34:01.798 "params": { 00:34:01.798 "small_cache_size": 128, 00:34:01.798 "large_cache_size": 16, 00:34:01.798 "task_count": 2048, 00:34:01.798 "sequence_count": 2048, 00:34:01.798 "buf_count": 2048 00:34:01.798 } 00:34:01.798 } 00:34:01.798 ] 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "subsystem": "bdev", 00:34:01.798 "config": [ 00:34:01.798 { 00:34:01.798 "method": "bdev_set_options", 00:34:01.798 "params": { 00:34:01.798 "bdev_io_pool_size": 65535, 00:34:01.798 "bdev_io_cache_size": 256, 00:34:01.798 "bdev_auto_examine": true, 00:34:01.798 "iobuf_small_cache_size": 128, 00:34:01.798 "iobuf_large_cache_size": 16 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "bdev_raid_set_options", 00:34:01.798 "params": { 00:34:01.798 "process_window_size_kb": 1024, 00:34:01.798 "process_max_bandwidth_mb_sec": 0 00:34:01.798 
} 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "bdev_iscsi_set_options", 00:34:01.798 "params": { 00:34:01.798 "timeout_sec": 30 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "bdev_nvme_set_options", 00:34:01.798 "params": { 00:34:01.798 "action_on_timeout": "none", 00:34:01.798 "timeout_us": 0, 00:34:01.798 "timeout_admin_us": 0, 00:34:01.798 "keep_alive_timeout_ms": 10000, 00:34:01.798 "arbitration_burst": 0, 00:34:01.798 "low_priority_weight": 0, 00:34:01.798 "medium_priority_weight": 0, 00:34:01.798 "high_priority_weight": 0, 00:34:01.798 "nvme_adminq_poll_period_us": 10000, 00:34:01.798 "nvme_ioq_poll_period_us": 0, 00:34:01.798 "io_queue_requests": 512, 00:34:01.798 "delay_cmd_submit": true, 00:34:01.798 "transport_retry_count": 4, 00:34:01.798 "bdev_retry_count": 3, 00:34:01.798 "transport_ack_timeout": 0, 00:34:01.798 "ctrlr_loss_timeout_sec": 0, 00:34:01.798 "reconnect_delay_sec": 0, 00:34:01.798 "fast_io_fail_timeout_sec": 0, 00:34:01.798 "disable_auto_failback": false, 00:34:01.798 "generate_uuids": false, 00:34:01.798 "transport_tos": 0, 00:34:01.798 "nvme_error_stat": false, 00:34:01.798 "rdma_srq_size": 0, 00:34:01.798 "io_path_stat": false, 00:34:01.798 "allow_accel_sequence": false, 00:34:01.798 "rdma_max_cq_size": 0, 00:34:01.798 "rdma_cm_event_timeout_ms": 0, 00:34:01.798 "dhchap_digests": [ 00:34:01.798 "sha256", 00:34:01.798 "sha384", 00:34:01.798 "sha512" 00:34:01.798 ], 00:34:01.798 "dhchap_dhgroups": [ 00:34:01.798 "null", 00:34:01.798 "ffdhe2048", 00:34:01.798 "ffdhe3072", 00:34:01.798 "ffdhe4096", 00:34:01.798 "ffdhe6144", 00:34:01.798 "ffdhe8192" 00:34:01.798 ] 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "bdev_nvme_attach_controller", 00:34:01.798 "params": { 00:34:01.798 "name": "nvme0", 00:34:01.798 "trtype": "TCP", 00:34:01.798 "adrfam": "IPv4", 00:34:01.798 "traddr": "127.0.0.1", 00:34:01.798 "trsvcid": "4420", 00:34:01.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.798 "prchk_reftag": false, 00:34:01.798 "prchk_guard": false, 00:34:01.798 "ctrlr_loss_timeout_sec": 0, 00:34:01.798 "reconnect_delay_sec": 0, 00:34:01.798 "fast_io_fail_timeout_sec": 0, 00:34:01.798 "psk": "key0", 00:34:01.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.798 "hdgst": false, 00:34:01.798 "ddgst": false 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "bdev_nvme_set_hotplug", 00:34:01.798 "params": { 00:34:01.798 "period_us": 100000, 00:34:01.798 "enable": false 00:34:01.798 } 00:34:01.798 }, 00:34:01.798 { 00:34:01.798 "method": "bdev_wait_for_examine" 00:34:01.798 } 00:34:01.799 ] 00:34:01.799 }, 00:34:01.799 { 00:34:01.799 "subsystem": "nbd", 00:34:01.799 "config": [] 00:34:01.799 } 00:34:01.799 ] 00:34:01.799 }' 00:34:01.799 [2024-07-26 14:28:18.608392] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
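Most assertions in this suite boil down to reading a key's refcnt back out of keyring_get_keys and comparing it against the expected number of users; the (( 2 == 2 )) and (( 1 == 1 )) checks above and below are exactly that. The recurring get_refcnt pattern, spelled out as a sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  get_refcnt() {
      $rpc -s /var/tmp/bperf.sock keyring_get_keys |
          jq -r ".[] | select(.name == \"$1\") | .refcnt"
  }
  # key0 is held once by the keyring and once by the attached nvme0 controller
  (( $(get_refcnt key0) == 2 ))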
00:34:01.799 [2024-07-26 14:28:18.608485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673394 ] 00:34:01.799 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.799 [2024-07-26 14:28:18.669813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.057 [2024-07-26 14:28:18.792460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.351 [2024-07-26 14:28:18.981226] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:02.919 14:28:19 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:02.919 14:28:19 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:34:02.919 14:28:19 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:34:02.919 14:28:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:02.919 14:28:19 keyring_file -- keyring/file.sh@120 -- # jq length 00:34:03.177 14:28:19 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:34:03.177 14:28:19 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:34:03.177 14:28:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:03.177 14:28:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:03.177 14:28:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:03.177 14:28:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:03.177 14:28:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:03.435 14:28:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:03.436 14:28:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:34:03.436 14:28:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:03.436 14:28:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:03.436 14:28:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:03.436 14:28:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:03.436 14:28:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:03.694 14:28:20 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:34:03.694 14:28:20 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:34:03.694 14:28:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:03.694 14:28:20 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:34:03.952 14:28:20 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:34:03.952 14:28:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:03.952 14:28:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.52oJTWcgGC /tmp/tmp.0YUbKk1yxc 00:34:03.952 14:28:20 keyring_file -- keyring/file.sh@20 -- # killprocess 2673394 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2673394 ']' 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2673394 00:34:03.952 14:28:20 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2673394 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2673394' 00:34:03.952 killing process with pid 2673394 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@969 -- # kill 2673394 00:34:03.952 Received shutdown signal, test time was about 1.000000 seconds 00:34:03.952 00:34:03.952 Latency(us) 00:34:03.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.952 =================================================================================================================== 00:34:03.952 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:03.952 14:28:20 keyring_file -- common/autotest_common.sh@974 -- # wait 2673394 00:34:04.212 14:28:21 keyring_file -- keyring/file.sh@21 -- # killprocess 2671522 00:34:04.212 14:28:21 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2671522 ']' 00:34:04.212 14:28:21 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2671522 00:34:04.212 14:28:21 keyring_file -- common/autotest_common.sh@955 -- # uname 00:34:04.212 14:28:21 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:04.212 14:28:21 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2671522 00:34:04.471 14:28:21 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:04.471 14:28:21 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:04.471 14:28:21 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2671522' 00:34:04.471 killing process with pid 2671522 00:34:04.471 14:28:21 keyring_file -- common/autotest_common.sh@969 -- # kill 2671522 00:34:04.471 [2024-07-26 14:28:21.105779] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:04.471 14:28:21 keyring_file -- common/autotest_common.sh@974 -- # wait 2671522 00:34:04.731 00:34:04.731 real 0m18.835s 00:34:04.731 user 0m48.097s 00:34:04.731 sys 0m3.998s 00:34:04.731 14:28:21 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:04.731 14:28:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:04.731 ************************************ 00:34:04.731 END TEST keyring_file 00:34:04.731 ************************************ 00:34:04.990 14:28:21 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:34:04.990 14:28:21 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:04.990 14:28:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:04.990 14:28:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:04.990 14:28:21 -- common/autotest_common.sh@10 -- # set +x 00:34:04.990 ************************************ 00:34:04.990 START TEST keyring_linux 00:34:04.990 ************************************ 00:34:04.990 14:28:21 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:04.990 * Looking for test 
storage... 00:34:04.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:04.990 14:28:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:04.990 14:28:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.990 14:28:21 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.990 14:28:21 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.990 14:28:21 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.990 14:28:21 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.990 14:28:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.991 14:28:21 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.991 14:28:21 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.991 14:28:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:04.991 14:28:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:04.991 14:28:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:04.991 14:28:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:04.991 14:28:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:04.991 14:28:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:04.991 14:28:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:04.991 14:28:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:04.991 14:28:21 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:04.991 /tmp/:spdk-test:key0 00:34:04.991 14:28:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:04.991 14:28:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:04.991 14:28:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:05.251 14:28:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:05.251 14:28:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:05.251 /tmp/:spdk-test:key1 00:34:05.251 14:28:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2673889 00:34:05.251 14:28:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:05.251 14:28:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2673889 00:34:05.251 14:28:21 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2673889 ']' 00:34:05.251 14:28:21 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.251 14:28:21 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:05.251 14:28:21 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.251 14:28:21 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:05.251 14:28:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:05.251 [2024-07-26 14:28:21.984815] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
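Both `prep_key` calls above funnel a raw hex string through `format_interchange_psk`, whose `python -` step emits the `NVMeTLSkey-1:00:...` strings that get loaded into the session keyring below. A sketch of what that step appears to compute, under the assumption (not lifted from `nvmf/common.sh` itself) that the interchange form is base64 over the ASCII key bytes with a little-endian CRC32 appended; with key0 this reproduces the exact `NVMeTLSkey-1:00:MDAx...wJEiQ:` value seen in the `keyctl add` further down:

```bash
# Assumed reconstruction of the format_interchange_psk / format_key step:
# "NVMeTLSkey-1:<digest>:" + base64(ASCII key || CRC32-LE) + ":".
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # hex string used as raw ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32, little-endian
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
```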
00:34:05.251 [2024-07-26 14:28:21.984927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673889 ] 00:34:05.251 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.251 [2024-07-26 14:28:22.056319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.511 [2024-07-26 14:28:22.181992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:34:05.770 14:28:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:05.770 [2024-07-26 14:28:22.463943] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.770 null0 00:34:05.770 [2024-07-26 14:28:22.495989] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:05.770 [2024-07-26 14:28:22.496539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.770 14:28:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:05.770 40620091 00:34:05.770 14:28:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:05.770 801833560 00:34:05.770 14:28:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2673957 00:34:05.770 14:28:22 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:05.770 14:28:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2673957 /var/tmp/bperf.sock 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2673957 ']' 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:05.770 14:28:22 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:05.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:05.771 14:28:22 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:05.771 14:28:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:05.771 [2024-07-26 14:28:22.572812] Starting SPDK v24.09-pre git sha1 dcc54343a / DPDK 24.03.0 initialization... 
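The two `keyctl add user ... @s` calls above return the serial numbers (40620091 and 801833560) that the test later resolves with `keyctl search`, dumps with `keyctl print`, and removes with `keyctl unlink` during cleanup. Condensed into one place, the kernel-keyring round trip the keyring_linux test drives looks like this (same commands as in the log; serial numbers are machine-specific, and the payload is key0's interchange string from the `prep_key` step above):

```bash
# Store the PSK in the session keyring, then verify name -> serial -> payload.
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # returns e.g. 40620091
keyctl search @s user :spdk-test:key0             # must resolve back to $sn
keyctl print "$sn"                                # payload must equal $psk
keyctl unlink "$sn"                               # cleanup: "1 links removed"
```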
00:34:05.771 [2024-07-26 14:28:22.572917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673957 ] 00:34:05.771 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.771 [2024-07-26 14:28:22.649425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.029 [2024-07-26 14:28:22.772242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.029 14:28:22 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:06.029 14:28:22 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:34:06.029 14:28:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:06.029 14:28:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:06.597 14:28:23 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:06.597 14:28:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:07.533 14:28:24 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:07.533 14:28:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:07.533 [2024-07-26 14:28:24.393866] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:07.791 nvme0n1 00:34:07.791 14:28:24 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:07.791 14:28:24 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:07.791 14:28:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:07.791 14:28:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:07.791 14:28:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:07.791 14:28:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.049 14:28:24 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:08.049 14:28:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:08.049 14:28:24 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:08.049 14:28:24 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:08.049 14:28:24 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.049 14:28:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.049 14:28:24 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:08.308 14:28:25 keyring_linux -- keyring/linux.sh@25 -- # sn=40620091 00:34:08.308 14:28:25 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:08.308 14:28:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:34:08.308 14:28:25 keyring_linux -- keyring/linux.sh@26 -- # [[ 40620091 == \4\0\6\2\0\0\9\1 ]] 00:34:08.308 14:28:25 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 40620091 00:34:08.308 14:28:25 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:08.308 14:28:25 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:08.308 Running I/O for 1 seconds... 00:34:09.684 00:34:09.684 Latency(us) 00:34:09.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.684 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:09.684 nvme0n1 : 1.02 4857.14 18.97 0.00 0.00 26121.48 9029.40 34952.53 00:34:09.684 =================================================================================================================== 00:34:09.684 Total : 4857.14 18.97 0.00 0.00 26121.48 9029.40 34952.53 00:34:09.684 0 00:34:09.684 14:28:26 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:09.684 14:28:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:09.684 14:28:26 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:09.684 14:28:26 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:09.684 14:28:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:09.684 14:28:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:09.684 14:28:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:09.684 14:28:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.942 14:28:26 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:09.942 14:28:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:09.942 14:28:26 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:09.942 14:28:26 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:09.942 14:28:26 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:34:09.942 14:28:26 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:09.942 14:28:26 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:09.942 14:28:26 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:09.942 14:28:26 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:09.942 14:28:26 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:09.942 14:28:26 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:09.942 14:28:26 keyring_linux -- keyring/common.sh@8 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:10.202 [2024-07-26 14:28:27.031301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:10.202 [2024-07-26 14:28:27.032141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2092fe0 (107): Transport endpoint is not connected 00:34:10.202 [2024-07-26 14:28:27.033131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2092fe0 (9): Bad file descriptor 00:34:10.202 [2024-07-26 14:28:27.034130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:10.202 [2024-07-26 14:28:27.034153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:10.202 [2024-07-26 14:28:27.034169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:10.202 request: 00:34:10.202 { 00:34:10.202 "name": "nvme0", 00:34:10.202 "trtype": "tcp", 00:34:10.202 "traddr": "127.0.0.1", 00:34:10.202 "adrfam": "ipv4", 00:34:10.202 "trsvcid": "4420", 00:34:10.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.202 "prchk_reftag": false, 00:34:10.202 "prchk_guard": false, 00:34:10.202 "hdgst": false, 00:34:10.202 "ddgst": false, 00:34:10.202 "psk": ":spdk-test:key1", 00:34:10.202 "method": "bdev_nvme_attach_controller", 00:34:10.202 "req_id": 1 00:34:10.202 } 00:34:10.202 Got JSON-RPC error response 00:34:10.202 response: 00:34:10.202 { 00:34:10.202 "code": -5, 00:34:10.202 "message": "Input/output error" 00:34:10.202 } 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@33 -- # sn=40620091 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 40620091 00:34:10.202 1 links removed 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@33 -- # sn=801833560 00:34:10.202 14:28:27 keyring_linux -- 
keyring/linux.sh@34 -- # keyctl unlink 801833560 00:34:10.202 1 links removed 00:34:10.202 14:28:27 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2673957 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2673957 ']' 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2673957 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:10.202 14:28:27 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2673957 00:34:10.460 14:28:27 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:10.460 14:28:27 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:10.460 14:28:27 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2673957' 00:34:10.460 killing process with pid 2673957 00:34:10.460 14:28:27 keyring_linux -- common/autotest_common.sh@969 -- # kill 2673957 00:34:10.460 Received shutdown signal, test time was about 1.000000 seconds 00:34:10.460 00:34:10.460 Latency(us) 00:34:10.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.460 =================================================================================================================== 00:34:10.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:10.460 14:28:27 keyring_linux -- common/autotest_common.sh@974 -- # wait 2673957 00:34:10.718 14:28:27 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2673889 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2673889 ']' 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2673889 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2673889 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2673889' 00:34:10.718 killing process with pid 2673889 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@969 -- # kill 2673889 00:34:10.718 14:28:27 keyring_linux -- common/autotest_common.sh@974 -- # wait 2673889 00:34:11.285 00:34:11.285 real 0m6.292s 00:34:11.285 user 0m12.617s 00:34:11.285 sys 0m1.752s 00:34:11.285 14:28:27 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:11.285 14:28:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:11.285 ************************************ 00:34:11.285 END TEST keyring_linux 00:34:11.285 ************************************ 00:34:11.285 14:28:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 
']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:34:11.285 14:28:27 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:11.285 14:28:27 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:11.285 14:28:27 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:34:11.285 14:28:27 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:34:11.285 14:28:27 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:34:11.285 14:28:27 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:34:11.285 14:28:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:11.285 14:28:27 -- common/autotest_common.sh@10 -- # set +x 00:34:11.285 14:28:27 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:34:11.285 14:28:27 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:11.285 14:28:27 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:11.285 14:28:27 -- common/autotest_common.sh@10 -- # set +x 00:34:13.188 INFO: APP EXITING 00:34:13.188 INFO: killing all VMs 00:34:13.188 INFO: killing vhost app 00:34:13.188 INFO: EXIT DONE 00:34:15.091 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:34:15.091 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:34:15.091 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:34:15.091 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:34:15.091 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:34:15.091 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:34:15.091 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:34:15.091 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:34:15.091 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:34:15.091 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:34:15.091 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:34:15.091 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:34:15.091 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:34:15.091 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:34:15.091 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:34:15.091 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:34:15.091 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:34:16.998 Cleaning 00:34:16.998 Removing: /var/run/dpdk/spdk0/config 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:16.998 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:16.998 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:16.998 Removing: /var/run/dpdk/spdk1/config 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:16.998 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:16.998 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:16.998 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:16.998 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:16.998 Removing: /var/run/dpdk/spdk2/config 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:16.998 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:16.998 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:16.998 Removing: /var/run/dpdk/spdk3/config 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:16.998 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:16.998 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:16.998 Removing: /var/run/dpdk/spdk4/config 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:16.998 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:16.998 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:16.998 Removing: /dev/shm/bdev_svc_trace.1 00:34:16.998 Removing: /dev/shm/nvmf_trace.0 00:34:16.998 Removing: /dev/shm/spdk_tgt_trace.pid2398154 00:34:16.998 Removing: /var/run/dpdk/spdk0 00:34:16.998 Removing: /var/run/dpdk/spdk1 00:34:16.998 Removing: /var/run/dpdk/spdk2 00:34:16.998 Removing: /var/run/dpdk/spdk3 00:34:16.998 Removing: /var/run/dpdk/spdk4 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2396480 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2397216 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2398154 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2398593 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2399280 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2399426 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2400140 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2400152 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2400448 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2402017 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2403029 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2403352 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2403655 00:34:16.998 
Removing: /var/run/dpdk/spdk_pid2403861 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2404055 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2404338 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2404493 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2404671 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2405258 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2408140 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2408426 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2408588 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2408670 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2409220 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2409256 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2409830 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2409841 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2410381 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2410512 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2410927 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2410951 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2411441 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2411598 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2411911 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2414046 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2416914 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2424238 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2424670 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2427199 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2427467 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2430270 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2434153 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2436587 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2443405 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2449302 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2450598 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2451262 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2462327 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2464732 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2492486 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2495794 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2499908 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2504028 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2504047 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2504679 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2505337 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2505876 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2506392 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2506403 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2506545 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2506799 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2506801 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2507403 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2507992 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2508646 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2509041 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2509053 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2509310 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2510330 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2511053 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2517140 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2548697 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2551841 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2553013 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2554239 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2554365 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2554492 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2554747 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2555193 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2556515 00:34:16.998 
Removing: /var/run/dpdk/spdk_pid2557379 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2557871 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2559675 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2560214 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2560784 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2563950 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2570001 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2572768 00:34:16.998 Removing: /var/run/dpdk/spdk_pid2576531 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2577475 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2578581 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2581294 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2583793 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2588164 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2588166 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2591202 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2591353 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2591488 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2591750 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2591755 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2594786 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2595120 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2598042 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2600535 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2604225 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2607902 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2615352 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2619846 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2619850 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2633723 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2634183 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2634715 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2635253 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2635840 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2636372 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2636911 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2637443 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2640095 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2640247 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2644161 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2644355 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2645963 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2651018 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2651023 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2654065 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2655464 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2656863 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2657599 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2659125 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2660114 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2665999 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2666321 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2666713 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2668279 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2668673 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2669049 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2671522 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2671532 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2673394 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2673889 00:34:17.258 Removing: /var/run/dpdk/spdk_pid2673957 00:34:17.258 Clean 00:34:17.258 14:28:34 -- common/autotest_common.sh@1451 -- # return 0 00:34:17.259 14:28:34 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:34:17.259 14:28:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:17.259 14:28:34 -- common/autotest_common.sh@10 -- # set +x 00:34:17.259 14:28:34 -- 
spdk/autotest.sh@390 -- # timing_exit autotest 00:34:17.259 14:28:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:17.259 14:28:34 -- common/autotest_common.sh@10 -- # set +x 00:34:17.519 14:28:34 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:17.519 14:28:34 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:17.519 14:28:34 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:17.519 14:28:34 -- spdk/autotest.sh@395 -- # hash lcov 00:34:17.519 14:28:34 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:17.519 14:28:34 -- spdk/autotest.sh@397 -- # hostname 00:34:17.519 14:28:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:17.808 geninfo: WARNING: invalid characters removed from testname! 00:35:39.259 14:29:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:39.259 14:29:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:39.259 14:29:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:39.259 14:29:55 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:43.442 14:29:59 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:50.011 14:30:06 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:00.018 14:30:15 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:00.018 14:30:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.018 14:30:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:00.018 14:30:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.018 14:30:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.018 14:30:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.018 14:30:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.018 14:30:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.018 14:30:15 -- paths/export.sh@5 -- $ export PATH 00:36:00.018 14:30:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.018 14:30:15 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:00.018 14:30:15 -- common/autobuild_common.sh@447 -- $ date +%s 00:36:00.018 14:30:15 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721997015.XXXXXX 00:36:00.018 14:30:15 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721997015.ajJr9H 00:36:00.018 14:30:15 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:36:00.018 14:30:15 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:36:00.018 14:30:15 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:36:00.018 14:30:15 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:00.018 14:30:15 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:00.018 14:30:15 -- common/autobuild_common.sh@463 -- $ get_config_params 00:36:00.018 14:30:15 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:36:00.018 14:30:15 -- common/autotest_common.sh@10 -- $ set +x 00:36:00.018 14:30:15 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:36:00.018 14:30:15 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:36:00.018 14:30:15 -- pm/common@17 -- $ local monitor 00:36:00.018 14:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:00.018 14:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:00.018 14:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:00.018 14:30:15 -- pm/common@21 -- $ date +%s 00:36:00.018 14:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:00.018 14:30:15 -- pm/common@21 -- $ date +%s 00:36:00.018 14:30:15 -- pm/common@25 -- $ sleep 1 00:36:00.018 14:30:15 -- pm/common@21 -- $ date +%s 00:36:00.018 14:30:15 -- pm/common@21 -- $ date +%s 00:36:00.018 14:30:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721997015 00:36:00.018 14:30:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721997015 00:36:00.018 14:30:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721997015 00:36:00.019 14:30:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721997015 00:36:00.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721997015_collect-vmstat.pm.log 00:36:00.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721997015_collect-cpu-load.pm.log 00:36:00.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721997015_collect-cpu-temp.pm.log 00:36:00.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721997015_collect-bmc-pm.bmc.pm.log 00:36:00.019 14:30:16 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:36:00.019 14:30:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:36:00.019 14:30:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:00.019 14:30:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:00.019 14:30:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:00.019 14:30:16 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:00.019 14:30:16 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:00.019 14:30:16 -- common/autotest_common.sh@737 
00:36:00.019 14:30:16 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:36:00.019 14:30:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:36:00.019 14:30:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.019 14:30:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:00.019 14:30:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:00.019 14:30:16 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:00.019 14:30:16 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:00.019 14:30:16 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:00.019 14:30:16 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:00.019 14:30:16 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:00.019 14:30:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:36:00.019 14:30:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:36:00.019 14:30:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:36:00.019 14:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:00.019 14:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:36:00.019 14:30:16 -- pm/common@44 -- $ pid=2684935
00:36:00.019 14:30:16 -- pm/common@50 -- $ kill -TERM 2684935
00:36:00.019 14:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:00.019 14:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:36:00.019 14:30:16 -- pm/common@44 -- $ pid=2684937
00:36:00.019 14:30:16 -- pm/common@50 -- $ kill -TERM 2684937
00:36:00.019 14:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:00.019 14:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:36:00.019 14:30:16 -- pm/common@44 -- $ pid=2684939
00:36:00.019 14:30:16 -- pm/common@50 -- $ kill -TERM 2684939
00:36:00.019 14:30:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:00.019 14:30:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:36:00.019 14:30:16 -- pm/common@44 -- $ pid=2684967
00:36:00.019 14:30:16 -- pm/common@50 -- $ sudo -E kill -TERM 2684967
00:36:00.019 + [[ -n 2305859 ]]
00:36:00.019 + sudo kill 2305859
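The teardown traced above mirrors the startup: stop_monitor_resources, fired here via the EXIT trap, walks the same resource list, and for every pidfile still present under output/power it reads the recorded PID and sends SIGTERM (with sudo -E for the BMC collector, which was started as root). A hypothetical counterpart to the start sketch, under the same pidfile assumption:

    # Hypothetical sketch of the stop sequence traced above.
    stop_monitor_resources() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # A missing pidfile means the monitor never started or is gone.
            [[ -e "$output_dir/$monitor.pid" ]] || continue
            pid=$(<"$output_dir/$monitor.pid")
            kill -TERM "$pid"   # a root-owned collector would need sudo kill
        done
    }
    trap stop_monitor_resources EXIT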
00:36:00.029 [Pipeline] }
00:36:00.048 [Pipeline] // stage
00:36:00.054 [Pipeline] }
00:36:00.071 [Pipeline] // timeout
00:36:00.076 [Pipeline] }
00:36:00.093 [Pipeline] // catchError
00:36:00.099 [Pipeline] }
00:36:00.116 [Pipeline] // wrap
00:36:00.121 [Pipeline] }
00:36:00.137 [Pipeline] // catchError
00:36:00.146 [Pipeline] stage
00:36:00.149 [Pipeline] { (Epilogue)
00:36:00.163 [Pipeline] catchError
00:36:00.165 [Pipeline] {
00:36:00.179 [Pipeline] echo
00:36:00.181 Cleanup processes
00:36:00.187 [Pipeline] sh
00:36:00.469 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.469 2685067 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:36:00.469 2685199 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.482 [Pipeline] sh
00:36:00.764 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.764 ++ grep -v 'sudo pgrep'
00:36:00.764 ++ awk '{print $1}'
00:36:00.764 + sudo kill -9 2685067
00:36:00.775 [Pipeline] sh
00:36:01.056 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:15.952 [Pipeline] sh
00:36:16.235 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:16.494 Artifacts sizes are good
00:36:16.531 [Pipeline] archiveArtifacts
00:36:16.553 Archiving artifacts
00:36:16.804 [Pipeline] sh
00:36:17.090 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:17.105 [Pipeline] cleanWs
00:36:17.115 [WS-CLEANUP] Deleting project workspace...
00:36:17.115 [WS-CLEANUP] Deferred wipeout is used...
00:36:17.122 [WS-CLEANUP] done
00:36:17.123 [Pipeline] }
00:36:17.144 [Pipeline] // catchError
00:36:17.157 [Pipeline] sh
00:36:17.438 + logger -p user.info -t JENKINS-CI
00:36:17.447 [Pipeline] }
00:36:17.463 [Pipeline] // stage
00:36:17.469 [Pipeline] }
00:36:17.486 [Pipeline] // node
00:36:17.491 [Pipeline] End of Pipeline
00:36:17.524 Finished: SUCCESS
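For reference, the Epilogue's "Cleanup processes" step above relies on a self-filtering pgrep pipeline: list every process whose command line mentions the workspace spdk tree, drop the pgrep invocation itself, keep only the PID column, then kill -9 whatever remains (here, a stray ipmitool left behind by the BMC collector). A standalone sketch of the same pattern; $WORKSPACE is a hypothetical stand-in, defaulted from the path seen in this log:

    #!/usr/bin/env bash
    # Hypothetical sketch of the self-filtering pgrep cleanup pattern.
    WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest}
    # pgrep -a prints "PID full-command-line"; grep -v removes the pgrep
    # line itself; awk keeps only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # $pids is left unquoted so multiple PIDs split into arguments;
    # '|| true' keeps an empty PID list from failing the stage.
    sudo kill -9 $pids || true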